Main » History » Version 164

Rishabh Dave, 07/27/2023 12:11 PM

h1. MAIN

h3. NEW ENTRY BELOW

h3. 28 JULY 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" is not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
  test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 JAN 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

Re-runs:
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
** needed this PR to be merged in the ceph-ci branch - https://github.com/ceph/ceph/pull/47458

Known bugs:
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on RHEL.

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
Transient selinux ping failure.

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
h3. 2022 July 05
1020
1021
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1022 62 Rishabh Dave
1023 64 Rishabh Dave
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1024
1025
On 2nd re-run only a few jobs failed -
1026
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1027
1028 62 Rishabh Dave
1029
* https://tracker.ceph.com/issues/56446
1030
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1031
* https://tracker.ceph.com/issues/55804
1032
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1033
1034
* https://tracker.ceph.com/issues/56445
1035
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1036
* https://tracker.ceph.com/issues/51267
1037 63 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1038
* https://tracker.ceph.com/issues/50224
1039
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1040 62 Rishabh Dave
1041
1042 61 Rishabh Dave
1043 58 Venky Shankar
h3. 2022 July 04
1044
1045
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1046
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
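
A minimal, hypothetical sketch of scheduling such a filtered run from Python by shelling out to teuthology-suite with --filter-out (the suite/branch values are placeholders; double-check the flag spellings against the teuthology version in use):

<pre>
# Hypothetical helper, not part of the run above: schedule an fs suite run
# while excluding rhel jobs, mirroring the --filter-out=rhel workaround.
# Assumes teuthology-suite is installed and on PATH; verify flag names
# against your teuthology version.
import subprocess

def schedule_fs_run(ceph_branch, machine_type="smithi"):
    cmd = [
        "teuthology-suite",
        "--suite", "fs",
        "--ceph", ceph_branch,           # the wip testing branch under test
        "--machine-type", machine_type,
        "--filter-out", "rhel",          # skip rhel jobs
    ]
    subprocess.run(cmd, check=True)      # raises CalledProcessError on failure

if __name__ == "__main__":
    # placeholder branch name, for illustration only
    schedule_fs_run("wip-vshankar-testing-20220627-100931")
</pre>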
1047
1048
* https://tracker.ceph.com/issues/56445
1049
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1050
* https://tracker.ceph.com/issues/56446
1051 59 Rishabh Dave
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1052
* https://tracker.ceph.com/issues/51964
1053
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1054
* https://tracker.ceph.com/issues/52624
1055 60 Rishabh Dave
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1056 59 Rishabh Dave
1057 57 Venky Shankar
h3. 2022 June 20
1058
1059
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1060
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1061
1062
* https://tracker.ceph.com/issues/52624
1063
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1064
* https://tracker.ceph.com/issues/55804
1065
    qa failure: pjd link tests failed
1066
* https://tracker.ceph.com/issues/54108
1067
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1068
* https://tracker.ceph.com/issues/55332
1069
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1070
1071 56 Patrick Donnelly
h3. 2022 June 13
1072
1073
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1074
1075
* https://tracker.ceph.com/issues/56024
1076
    cephadm: removes ceph.conf during qa run causing command failure
1077
* https://tracker.ceph.com/issues/48773
1078
    qa: scrub does not complete
1079
* https://tracker.ceph.com/issues/56012
1080
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1081
1082
1083 55 Venky Shankar
h3. 2022 Jun 13
1084 54 Venky Shankar
1085
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1086
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1087
1088
* https://tracker.ceph.com/issues/52624
1089
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1090
* https://tracker.ceph.com/issues/51964
1091
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1092
* https://tracker.ceph.com/issues/53859
1093
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1094
* https://tracker.ceph.com/issues/55804
1095
    qa failure: pjd link tests failed
1096
* https://tracker.ceph.com/issues/56003
1097
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1098
* https://tracker.ceph.com/issues/56011
1099
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1100
* https://tracker.ceph.com/issues/56012
1101
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1102
1103 53 Venky Shankar
h3. 2022 Jun 07
1104
1105
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1106
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1107
1108
* https://tracker.ceph.com/issues/52624
1109
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1110
* https://tracker.ceph.com/issues/50223
1111
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1112
* https://tracker.ceph.com/issues/50224
1113
    qa: test_mirroring_init_failure_with_recovery failure
1114
1115 51 Venky Shankar
h3. 2022 May 12
1116
1117
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1118 52 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)
1119 51 Venky Shankar
1120
* https://tracker.ceph.com/issues/52624
1121
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1122
* https://tracker.ceph.com/issues/50223
1123
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1124
* https://tracker.ceph.com/issues/55332
1125
    Failure in snaptest-git-ceph.sh
1126
* https://tracker.ceph.com/issues/53859
1127
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1128
* https://tracker.ceph.com/issues/55538
1129 1 Patrick Donnelly
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1130 52 Venky Shankar
* https://tracker.ceph.com/issues/55258
1131
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1132 51 Venky Shankar
1133 49 Venky Shankar
h3. 2022 May 04
1134
1135 50 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1136
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1137
1138 49 Venky Shankar
* https://tracker.ceph.com/issues/52624
1139
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1140
* https://tracker.ceph.com/issues/50223
1141
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1142
* https://tracker.ceph.com/issues/55332
1143
    Failure in snaptest-git-ceph.sh
1144
* https://tracker.ceph.com/issues/53859
1145
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1146
* https://tracker.ceph.com/issues/55516
1147
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1148
* https://tracker.ceph.com/issues/55537
1149
    mds: crash during fs:upgrade test
1150
* https://tracker.ceph.com/issues/55538
1151
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1152
1153 48 Venky Shankar
h3. 2022 Apr 25
1154
1155
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1156
1157
* https://tracker.ceph.com/issues/52624
1158
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1159
* https://tracker.ceph.com/issues/50223
1160
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1161
* https://tracker.ceph.com/issues/55258
1162
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1163
* https://tracker.ceph.com/issues/55377
1164
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1165
1166 47 Venky Shankar
h3. 2022 Apr 14
1167
1168
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1169
1170
* https://tracker.ceph.com/issues/52624
1171
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1172
* https://tracker.ceph.com/issues/50223
1173
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1174
* https://tracker.ceph.com/issues/52438
1175
    qa: ffsb timeout
1176
* https://tracker.ceph.com/issues/55170
1177
    mds: crash during rejoin (CDir::fetch_keys)
1178
* https://tracker.ceph.com/issues/55331
1179
    pjd failure
1180
* https://tracker.ceph.com/issues/48773
1181
    qa: scrub does not complete
1182
* https://tracker.ceph.com/issues/55332
1183
    Failure in snaptest-git-ceph.sh
1184
* https://tracker.ceph.com/issues/55258
1185
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1186
1187 45 Venky Shankar
h3. 2022 Apr 11
1188
1189 46 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1190 45 Venky Shankar
1191
* https://tracker.ceph.com/issues/48773
1192
    qa: scrub does not complete
1193
* https://tracker.ceph.com/issues/52624
1194
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1195
* https://tracker.ceph.com/issues/52438
1196
    qa: ffsb timeout
1197
* https://tracker.ceph.com/issues/48680
1198
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1199
* https://tracker.ceph.com/issues/55236
1200
    qa: fs/snaps tests fail with "hit max job timeout"
1201
* https://tracker.ceph.com/issues/54108
1202
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1203
* https://tracker.ceph.com/issues/54971
1204
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1205
* https://tracker.ceph.com/issues/50223
1206
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1207
* https://tracker.ceph.com/issues/55258
1208
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1209
1210 44 Venky Shankar
h3. 2022 Mar 21
1211 42 Venky Shankar
1212 43 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1213
1214
The run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.
1215
1216
1217
h3. 2022 Mar 08
1218
1219 42 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1220
1221
rerun with
1222
- (drop) https://github.com/ceph/ceph/pull/44679
1223
- (drop) https://github.com/ceph/ceph/pull/44958
1224
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1225
1226
* https://tracker.ceph.com/issues/54419 (new)
1227
    `ceph orch upgrade start` seems to never reach completion (see the status-polling sketch after this list)
1228
* https://tracker.ceph.com/issues/51964
1229
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1230
* https://tracker.ceph.com/issues/52624
1231
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1232
* https://tracker.ceph.com/issues/50223
1233
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1234
* https://tracker.ceph.com/issues/52438
1235
    qa: ffsb timeout
1236
* https://tracker.ceph.com/issues/50821
1237
    qa: untar_snap_rm failure during mds thrashing
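
Status-polling sketch referenced in the https://tracker.ceph.com/issues/54419 entry above: a hypothetical way to watch whether an upgrade kicked off with `ceph orch upgrade start` is making progress, by polling `ceph orch upgrade status`. The JSON field names ("in_progress", "message") are assumptions; verify them against `ceph orch upgrade status --format json` on a live cluster.

<pre>
# Hypothetical sketch, not taken from the tracker: poll the orchestrator's
# upgrade status until it reports the upgrade is no longer in progress.
# Field names ("in_progress", "message") are assumptions to verify.
import json
import subprocess
import time

def wait_for_upgrade(poll_seconds=60, max_polls=60):
    for _ in range(max_polls):
        out = subprocess.check_output(
            ["ceph", "orch", "upgrade", "status", "--format", "json"]
        )
        status = json.loads(out)
        print(status.get("message") or "", flush=True)
        if not status.get("in_progress", False):
            return True          # upgrade finished (or was never running)
        time.sleep(poll_seconds)
    return False                 # still in progress after max_polls checks

if __name__ == "__main__":
    if not wait_for_upgrade():
        print("upgrade still not complete; see https://tracker.ceph.com/issues/54419")
</pre>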
1238
1239
1240 41 Venky Shankar
h3. 2022 Feb 09
1241
1242
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1243
1244
rerun with
1245
- (drop) https://github.com/ceph/ceph/pull/37938
1246
- (drop) https://github.com/ceph/ceph/pull/44335
1247
- (drop) https://github.com/ceph/ceph/pull/44491
1248
- (drop) https://github.com/ceph/ceph/pull/44501
1249
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1250
1251
* https://tracker.ceph.com/issues/51964
1252
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1253
* https://tracker.ceph.com/issues/54066
1254
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1255
* https://tracker.ceph.com/issues/48773
1256
    qa: scrub does not complete
1257
* https://tracker.ceph.com/issues/52624
1258
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1259
* https://tracker.ceph.com/issues/50223
1260
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1261
* https://tracker.ceph.com/issues/52438
1262
    qa: ffsb timeout
1263
1264 40 Patrick Donnelly
h3. 2022 Feb 01
1265
1266
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1267
1268
* https://tracker.ceph.com/issues/54107
1269
    kclient: hang during umount
1270
* https://tracker.ceph.com/issues/54106
1271
    kclient: hang during workunit cleanup
1272
* https://tracker.ceph.com/issues/54108
1273
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1274
* https://tracker.ceph.com/issues/48773
1275
    qa: scrub does not complete
1276
* https://tracker.ceph.com/issues/52438
1277
    qa: ffsb timeout
1278
1279
1280 36 Venky Shankar
h3. 2022 Jan 13
1281
1282
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1283 39 Venky Shankar
1284 36 Venky Shankar
rerun with:
1285 38 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1286
- (drop) https://github.com/ceph/ceph/pull/43184
1287 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1288
1289
* https://tracker.ceph.com/issues/50223
1290
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1291
* https://tracker.ceph.com/issues/51282
1292
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1293
* https://tracker.ceph.com/issues/48773
1294
    qa: scrub does not complete
1295
* https://tracker.ceph.com/issues/52624
1296
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1297
* https://tracker.ceph.com/issues/53859
1298
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1299
1300 34 Venky Shankar
h3. 2022 Jan 03
1301
1302
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
1303
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
1304
1305
* https://tracker.ceph.com/issues/50223
1306
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1307
* https://tracker.ceph.com/issues/51964
1308
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1309
* https://tracker.ceph.com/issues/51267
1310
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1311
* https://tracker.ceph.com/issues/51282
1312
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1313
* https://tracker.ceph.com/issues/50821
1314
    qa: untar_snap_rm failure during mds thrashing
1315
* https://tracker.ceph.com/issues/51278
1316
    mds: "FAILED ceph_assert(!segments.empty())"
1317 35 Ramana Raja
* https://tracker.ceph.com/issues/52279
1318
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1319
1320 34 Venky Shankar
1321 33 Patrick Donnelly
h3. 2021 Dec 22
1322
1323
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
1324
1325
* https://tracker.ceph.com/issues/52624
1326
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1327
* https://tracker.ceph.com/issues/50223
1328
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1329
* https://tracker.ceph.com/issues/52279
1330
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1331
* https://tracker.ceph.com/issues/50224
1332
    qa: test_mirroring_init_failure_with_recovery failure
1333
* https://tracker.ceph.com/issues/48773
1334
    qa: scrub does not complete
1335
1336
1337 32 Venky Shankar
h3. 2021 Nov 30
1338
1339
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
1340
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
1341
1342
* https://tracker.ceph.com/issues/53436
1343
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
1344
* https://tracker.ceph.com/issues/51964
1345
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1346
* https://tracker.ceph.com/issues/48812
1347
    qa: test_scrub_pause_and_resume_with_abort failure
1348
* https://tracker.ceph.com/issues/51076
1349
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1350
* https://tracker.ceph.com/issues/50223
1351
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1352
* https://tracker.ceph.com/issues/52624
1353
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1354
* https://tracker.ceph.com/issues/50250
1355
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1356
1357
1358 31 Patrick Donnelly
h3. 2021 November 9
1359
1360
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
1361
1362
* https://tracker.ceph.com/issues/53214
1363
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
1364
* https://tracker.ceph.com/issues/48773
1365
    qa: scrub does not complete
1366
* https://tracker.ceph.com/issues/50223
1367
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1368
* https://tracker.ceph.com/issues/51282
1369
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1370
* https://tracker.ceph.com/issues/52624
1371
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1372
* https://tracker.ceph.com/issues/53216
1373
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
1374
* https://tracker.ceph.com/issues/50250
1375
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1376
1377
1378
1379 30 Patrick Donnelly
h3. 2021 November 03
1380
1381
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
1382
1383
* https://tracker.ceph.com/issues/51964
1384
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1385
* https://tracker.ceph.com/issues/51282
1386
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1387
* https://tracker.ceph.com/issues/52436
1388
    fs/ceph: "corrupt mdsmap"
1389
* https://tracker.ceph.com/issues/53074
1390
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1391
* https://tracker.ceph.com/issues/53150
1392
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
1393
* https://tracker.ceph.com/issues/53155
1394
    MDSMonitor: assertion during upgrade to v16.2.5+
1395
1396
1397 29 Patrick Donnelly
h3. 2021 October 26
1398
1399
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1400
1401
* https://tracker.ceph.com/issues/53074
1402
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1403
* https://tracker.ceph.com/issues/52997
1404
    testing: hanging umount
1405
* https://tracker.ceph.com/issues/50824
1406
    qa: snaptest-git-ceph bus error
1407
* https://tracker.ceph.com/issues/52436
1408
    fs/ceph: "corrupt mdsmap"
1409
* https://tracker.ceph.com/issues/48773
1410
    qa: scrub does not complete
1411
* https://tracker.ceph.com/issues/53082
1412
    ceph-fuse: segmentation fault in Client::handle_mds_map
1413
* https://tracker.ceph.com/issues/50223
1414
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1415
* https://tracker.ceph.com/issues/52624
1416
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1417
* https://tracker.ceph.com/issues/50224
1418
    qa: test_mirroring_init_failure_with_recovery failure
1419
* https://tracker.ceph.com/issues/50821
1420
    qa: untar_snap_rm failure during mds thrashing
1421
* https://tracker.ceph.com/issues/50250
1422
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1423
1424
1425
1426 27 Patrick Donnelly
h3. 2021 October 19
1427
1428 28 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
1429 27 Patrick Donnelly
1430
* https://tracker.ceph.com/issues/52995
1431
    qa: test_standby_count_wanted failure
1432
* https://tracker.ceph.com/issues/52948
1433
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1434
* https://tracker.ceph.com/issues/52996
1435
    qa: test_perf_counters via test_openfiletable
1436
* https://tracker.ceph.com/issues/48772
1437
    qa: pjd: not ok 9, 44, 80
1438
* https://tracker.ceph.com/issues/52997
1439
    testing: hanging umount
1440
* https://tracker.ceph.com/issues/50250
1441
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1442
* https://tracker.ceph.com/issues/52624
1443
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1444
* https://tracker.ceph.com/issues/50223
1445
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1446
* https://tracker.ceph.com/issues/50821
1447
    qa: untar_snap_rm failure during mds thrashing
1448
* https://tracker.ceph.com/issues/48773
1449
    qa: scrub does not complete
1450
1451
1452 26 Patrick Donnelly
h3. 2021 October 12
1453
1454
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
1455
1456
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
1457
1458
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
1459
1460
1461
* https://tracker.ceph.com/issues/51282
1462
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1463
* https://tracker.ceph.com/issues/52948
1464
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1465
* https://tracker.ceph.com/issues/48773
1466
    qa: scrub does not complete
1467
* https://tracker.ceph.com/issues/50224
1468
    qa: test_mirroring_init_failure_with_recovery failure
1469
* https://tracker.ceph.com/issues/52949
1470
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
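
The counter failure above means the qa suite expected certain MDS perf counters (here 'mds.dir_split') to be bumped during the workload. A rough, hypothetical way to inspect such counters by hand (not the qa task's actual code) is via `ceph tell mds.<id> perf dump`; the "mds" section and counter names below are assumptions to verify against real perf dump output.

<pre>
# Hypothetical illustration only: read selected MDS perf counters via
# `ceph tell mds.<id> perf dump` and report any that are still zero.
# The "mds" section and counter names are assumptions to verify.
import json
import subprocess

def mds_counters(mds_id, names):
    out = subprocess.check_output(
        ["ceph", "tell", "mds.%s" % mds_id, "perf", "dump"]
    )
    perf = json.loads(out)
    mds_section = perf.get("mds", {})
    return {name: int(mds_section.get(name, 0)) for name in names}

if __name__ == "__main__":
    counters = mds_counters("a", ["dir_split", "exported", "imported"])
    missing = [name for name, value in counters.items() if value == 0]
    if missing:
        print("counters not set on mds.a: %s" % missing)
    else:
        print("all counters bumped: %s" % counters)
</pre>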
1471
1472
1473 25 Patrick Donnelly
h3. 2021 October 02
1474 23 Patrick Donnelly
1475 24 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
1476
1477
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
1478
1479
test_simple failures caused by PR in this set.
1480
1481
A few reruns because of QA infra noise.
1482
1483
* https://tracker.ceph.com/issues/52822
1484
    qa: failed pacific install on fs:upgrade
1485
* https://tracker.ceph.com/issues/52624
1486
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1487
* https://tracker.ceph.com/issues/50223
1488
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1489
* https://tracker.ceph.com/issues/48773
1490
    qa: scrub does not complete
1491
1492
1493
h3. 2021 September 20
1494
1495 23 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
1496
1497
* https://tracker.ceph.com/issues/52677
1498
    qa: test_simple failure
1499
* https://tracker.ceph.com/issues/51279
1500
    kclient hangs on umount (testing branch)
1501
* https://tracker.ceph.com/issues/50223
1502
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1503
* https://tracker.ceph.com/issues/50250
1504
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1505
* https://tracker.ceph.com/issues/52624
1506
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1507
* https://tracker.ceph.com/issues/52438
1508
    qa: ffsb timeout
1509
1510
1511 22 Patrick Donnelly
h3. 2021 September 10
1512
1513
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
1514
1515
* https://tracker.ceph.com/issues/50223
1516
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1517
* https://tracker.ceph.com/issues/50250
1518
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1519
* https://tracker.ceph.com/issues/52624
1520
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1521
* https://tracker.ceph.com/issues/52625
1522
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
1523
* https://tracker.ceph.com/issues/52439
1524
    qa: acls does not compile on centos stream
1525
* https://tracker.ceph.com/issues/50821
1526
    qa: untar_snap_rm failure during mds thrashing
1527
* https://tracker.ceph.com/issues/48773
1528
    qa: scrub does not complete
1529
* https://tracker.ceph.com/issues/52626
1530
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
1531
* https://tracker.ceph.com/issues/51279
1532
    kclient hangs on umount (testing branch)
1533
1534
1535 21 Patrick Donnelly
h3. 2021 August 27
1536
1537
Several jobs died because of device failures.
1538
1539
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
1540
1541
* https://tracker.ceph.com/issues/52430
1542
    mds: fast async create client mount breaks racy test
1543
* https://tracker.ceph.com/issues/52436
1544
    fs/ceph: "corrupt mdsmap"
1545
* https://tracker.ceph.com/issues/52437
1546
    mds: InoTable::replay_release_ids abort via test_inotable_sync
1547
* https://tracker.ceph.com/issues/51282
1548
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1549
* https://tracker.ceph.com/issues/52438
1550
    qa: ffsb timeout
1551
* https://tracker.ceph.com/issues/52439
1552
    qa: acls does not compile on centos stream
1553
1554
1555 20 Patrick Donnelly
h3. 2021 July 30
1556
1557
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
1558
1559
* https://tracker.ceph.com/issues/50250
1560
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1561
* https://tracker.ceph.com/issues/51282
1562
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1563
* https://tracker.ceph.com/issues/48773
1564
    qa: scrub does not complete
1565
* https://tracker.ceph.com/issues/51975
1566
    pybind/mgr/stats: KeyError
1567
1568
1569 19 Patrick Donnelly
h3. 2021 July 28
1570
1571
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
1572
1573
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
1574
1575
* https://tracker.ceph.com/issues/51905
1576
    qa: "error reading sessionmap 'mds1_sessionmap'"
1577
* https://tracker.ceph.com/issues/48773
1578
    qa: scrub does not complete
1579
* https://tracker.ceph.com/issues/50250
1580
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1581
* https://tracker.ceph.com/issues/51267
1582
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1583
* https://tracker.ceph.com/issues/51279
1584
    kclient hangs on umount (testing branch)
1585
1586
1587 18 Patrick Donnelly
h3. 2021 July 16
1588
1589
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
1590
1591
* https://tracker.ceph.com/issues/48773
1592
    qa: scrub does not complete
1593
* https://tracker.ceph.com/issues/48772
1594
    qa: pjd: not ok 9, 44, 80
1595
* https://tracker.ceph.com/issues/45434
1596
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1597
* https://tracker.ceph.com/issues/51279
1598
    kclient hangs on umount (testing branch)
1599
* https://tracker.ceph.com/issues/50824
1600
    qa: snaptest-git-ceph bus error
1601
1602
1603 17 Patrick Donnelly
h3. 2021 July 04
1604
1605
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
1606
1607
* https://tracker.ceph.com/issues/48773
1608
    qa: scrub does not complete
1609
* https://tracker.ceph.com/issues/39150
1610
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
1611
* https://tracker.ceph.com/issues/45434
1612
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1613
* https://tracker.ceph.com/issues/51282
1614
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1615
* https://tracker.ceph.com/issues/48771
1616
    qa: iogen: workload fails to cause balancing
1617
* https://tracker.ceph.com/issues/51279
1618
    kclient hangs on umount (testing branch)
1619
* https://tracker.ceph.com/issues/50250
1620
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1621
1622
1623 16 Patrick Donnelly
h3. 2021 July 01
1624
1625
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
1626
1627
* https://tracker.ceph.com/issues/51197
1628
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
1629
* https://tracker.ceph.com/issues/50866
1630
    osd: stat mismatch on objects
1631
* https://tracker.ceph.com/issues/48773
1632
    qa: scrub does not complete
1633
1634
1635 15 Patrick Donnelly
h3. 2021 June 26
1636
1637
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
1638
1639
* https://tracker.ceph.com/issues/51183
1640
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1641
* https://tracker.ceph.com/issues/51410
1642
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
1643
* https://tracker.ceph.com/issues/48773
1644
    qa: scrub does not complete
1645
* https://tracker.ceph.com/issues/51282
1646
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1647
* https://tracker.ceph.com/issues/51169
1648
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1649
* https://tracker.ceph.com/issues/48772
1650
    qa: pjd: not ok 9, 44, 80
1651
1652
1653 14 Patrick Donnelly
h3. 2021 June 21
1654
1655
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
1656
1657
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
1658
1659
* https://tracker.ceph.com/issues/51282
1660
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1661
* https://tracker.ceph.com/issues/51183
1662
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1663
* https://tracker.ceph.com/issues/48773
1664
    qa: scrub does not complete
1665
* https://tracker.ceph.com/issues/48771
1666
    qa: iogen: workload fails to cause balancing
1667
* https://tracker.ceph.com/issues/51169
1668
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1669
* https://tracker.ceph.com/issues/50495
1670
    libcephfs: shutdown race fails with status 141
1671
* https://tracker.ceph.com/issues/45434
1672
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1673
* https://tracker.ceph.com/issues/50824
1674
    qa: snaptest-git-ceph bus error
1675
* https://tracker.ceph.com/issues/50223
1676
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1677
1678
1679 13 Patrick Donnelly
h3. 2021 June 16
1680
1681
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
1682
1683
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
1684
1685
* https://tracker.ceph.com/issues/45434
1686
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1687
* https://tracker.ceph.com/issues/51169
1688
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1689
* https://tracker.ceph.com/issues/43216
1690
    MDSMonitor: removes MDS coming out of quorum election
1691
* https://tracker.ceph.com/issues/51278
1692
    mds: "FAILED ceph_assert(!segments.empty())"
1693
* https://tracker.ceph.com/issues/51279
1694
    kclient hangs on umount (testing branch)
1695
* https://tracker.ceph.com/issues/51280
1696
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
1697
* https://tracker.ceph.com/issues/51183
1698
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1699
* https://tracker.ceph.com/issues/51281
1700
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
1701
* https://tracker.ceph.com/issues/48773
1702
    qa: scrub does not complete
1703
* https://tracker.ceph.com/issues/51076
1704
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1705
* https://tracker.ceph.com/issues/51228
1706
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
1707
* https://tracker.ceph.com/issues/51282
1708
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1709
1710
1711 12 Patrick Donnelly
h3. 2021 June 14
1712
1713
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
1714
1715
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
1716
1717
* https://tracker.ceph.com/issues/51169
1718
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1719
* https://tracker.ceph.com/issues/51228
1720
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
1721
* https://tracker.ceph.com/issues/48773
1722
    qa: scrub does not complete
1723
* https://tracker.ceph.com/issues/51183
1724
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1725
* https://tracker.ceph.com/issues/45434
1726
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1727
* https://tracker.ceph.com/issues/51182
1728
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
1729
* https://tracker.ceph.com/issues/51229
1730
    qa: test_multi_snap_schedule list difference failure
1731
* https://tracker.ceph.com/issues/50821
1732
    qa: untar_snap_rm failure during mds thrashing
1733
1734
1735 11 Patrick Donnelly
h3. 2021 June 13
1736
1737
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
1738
1739
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
1740
1741
* https://tracker.ceph.com/issues/51169
1742
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1743
* https://tracker.ceph.com/issues/48773
1744
    qa: scrub does not complete
1745
* https://tracker.ceph.com/issues/51182
1746
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
1747
* https://tracker.ceph.com/issues/51183
1748
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1749
* https://tracker.ceph.com/issues/51197
1750
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
1751
* https://tracker.ceph.com/issues/45434
1752
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1753
1754 10 Patrick Donnelly
h3. 2021 June 11
1755
1756
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
1757
1758
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
1759
1760
* https://tracker.ceph.com/issues/51169
1761
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1762
* https://tracker.ceph.com/issues/45434
1763
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1764
* https://tracker.ceph.com/issues/48771
1765
    qa: iogen: workload fails to cause balancing
1766
* https://tracker.ceph.com/issues/43216
1767
    MDSMonitor: removes MDS coming out of quorum election
1768
* https://tracker.ceph.com/issues/51182
1769
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
1770
* https://tracker.ceph.com/issues/50223
1771
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1772
* https://tracker.ceph.com/issues/48773
1773
    qa: scrub does not complete
1774
* https://tracker.ceph.com/issues/51183
1775
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1776
* https://tracker.ceph.com/issues/51184
1777
    qa: fs:bugs does not specify distro
1778
1779
1780 9 Patrick Donnelly
h3. 2021 June 03
1781
1782
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
1783
1784
* https://tracker.ceph.com/issues/45434
1785
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1786
* https://tracker.ceph.com/issues/50016
1787
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
1788
* https://tracker.ceph.com/issues/50821
1789
    qa: untar_snap_rm failure during mds thrashing
1790
* https://tracker.ceph.com/issues/50622 (regression)
1791
    msg: active_connections regression
1792
* https://tracker.ceph.com/issues/49845#note-2 (regression)
1793
    qa: failed umount in test_volumes
1794
* https://tracker.ceph.com/issues/48773
1795
    qa: scrub does not complete
1796
* https://tracker.ceph.com/issues/43216
1797
    MDSMonitor: removes MDS coming out of quorum election
1798
1799
1800 7 Patrick Donnelly
h3. 2021 May 18
1801
1802 8 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
1803
1804
A regression in the testing kernel caused some failures. Ilya fixed those and the rerun
1805
looked better. Some odd new noise in the rerun relating to packaging and "No
1806
module named 'tasks.ceph'".
1807
1808
* https://tracker.ceph.com/issues/50824
1809
    qa: snaptest-git-ceph bus error
1810
* https://tracker.ceph.com/issues/50622 (regression)
1811
    msg: active_connections regression
1812
* https://tracker.ceph.com/issues/49845#note-2 (regression)
1813
    qa: failed umount in test_volumes
1814
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
1815
    qa: quota failure
1816
1817
1818
h3. 2021 May 18
1819
1820 7 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
1821
1822
* https://tracker.ceph.com/issues/50821
1823
    qa: untar_snap_rm failure during mds thrashing
1824
* https://tracker.ceph.com/issues/48773
1825
    qa: scrub does not complete
1826
* https://tracker.ceph.com/issues/45591
1827
    mgr: FAILED ceph_assert(daemon != nullptr)
1828
* https://tracker.ceph.com/issues/50866
1829
    osd: stat mismatch on objects
1830
* https://tracker.ceph.com/issues/50016
1831
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
1832
* https://tracker.ceph.com/issues/50867
1833
    qa: fs:mirror: reduced data availability
1834
1836
* https://tracker.ceph.com/issues/50622 (regression)
1837
    msg: active_connections regression
1838
* https://tracker.ceph.com/issues/50223
1839
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1840
* https://tracker.ceph.com/issues/50868
1841
    qa: "kern.log.gz already exists; not overwritten"
1842
* https://tracker.ceph.com/issues/50870
1843
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
1844
1845
1846 6 Patrick Donnelly
h3. 2021 May 11
1847
1848
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
1849
1850
* one class of failures caused by PR
1851
* https://tracker.ceph.com/issues/48812
1852
    qa: test_scrub_pause_and_resume_with_abort failure
1853
* https://tracker.ceph.com/issues/50390
1854
    mds: monclient: wait_auth_rotating timed out after 30
1855
* https://tracker.ceph.com/issues/48773
1856
    qa: scrub does not complete
1857
* https://tracker.ceph.com/issues/50821
1858
    qa: untar_snap_rm failure during mds thrashing
1859
* https://tracker.ceph.com/issues/50224
1860
    qa: test_mirroring_init_failure_with_recovery failure
1861
* https://tracker.ceph.com/issues/50622 (regression)
1862
    msg: active_connections regression
1863
* https://tracker.ceph.com/issues/50825
1864
    qa: snaptest-git-ceph hang during mon thrashing v2
1865
1867
* https://tracker.ceph.com/issues/50823
1868
    qa: RuntimeError: timeout waiting for cluster to stabilize
1869
1870
1871 5 Patrick Donnelly
h3. 2021 May 14
1872
1873
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
1874
1875
* https://tracker.ceph.com/issues/48812
1876
    qa: test_scrub_pause_and_resume_with_abort failure
1877
* https://tracker.ceph.com/issues/50821
1878
    qa: untar_snap_rm failure during mds thrashing
1879
* https://tracker.ceph.com/issues/50622 (regression)
1880
    msg: active_connections regression
1881
* https://tracker.ceph.com/issues/50822
1882
    qa: testing kernel patch for client metrics causes mds abort
1883
* https://tracker.ceph.com/issues/48773
1884
    qa: scrub does not complete
1885
* https://tracker.ceph.com/issues/50823
1886
    qa: RuntimeError: timeout waiting for cluster to stabilize
1887
* https://tracker.ceph.com/issues/50824
1888
    qa: snaptest-git-ceph bus error
1889
* https://tracker.ceph.com/issues/50825
1890
    qa: snaptest-git-ceph hang during mon thrashing v2
1891
* https://tracker.ceph.com/issues/50826
1892
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
1893
1894
1895 4 Patrick Donnelly
h3. 2021 May 01
1896
1897
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
1898
1899
* https://tracker.ceph.com/issues/45434
1900
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1901
* https://tracker.ceph.com/issues/50281
1902
    qa: untar_snap_rm timeout
1903
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
1904
    qa: quota failure
1905
* https://tracker.ceph.com/issues/48773
1906
    qa: scrub does not complete
1907
* https://tracker.ceph.com/issues/50390
1908
    mds: monclient: wait_auth_rotating timed out after 30
1909
* https://tracker.ceph.com/issues/50250
1910
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
1911
* https://tracker.ceph.com/issues/50622 (regression)
1912
    msg: active_connections regression
1913
* https://tracker.ceph.com/issues/45591
1914
    mgr: FAILED ceph_assert(daemon != nullptr)
1915
* https://tracker.ceph.com/issues/50221
1916
    qa: snaptest-git-ceph failure in git diff
1917
* https://tracker.ceph.com/issues/50016
1918
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
1919
1920
1921 3 Patrick Donnelly
h3. 2021 Apr 15
1922
1923
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
1924
1925
* https://tracker.ceph.com/issues/50281
1926
    qa: untar_snap_rm timeout
1927
* https://tracker.ceph.com/issues/50220
1928
    qa: dbench workload timeout
1929
* https://tracker.ceph.com/issues/50246
1930
    mds: failure replaying journal (EMetaBlob)
1931
* https://tracker.ceph.com/issues/50250
1932
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
1933
* https://tracker.ceph.com/issues/50016
1934
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
1935
* https://tracker.ceph.com/issues/50222
1936
    osd: 5.2s0 deep-scrub : stat mismatch
1937
* https://tracker.ceph.com/issues/45434
1938
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1939
* https://tracker.ceph.com/issues/49845
1940
    qa: failed umount in test_volumes
1941
* https://tracker.ceph.com/issues/37808
1942
    osd: osdmap cache weak_refs assert during shutdown
1943
* https://tracker.ceph.com/issues/50387
1944
    client: fs/snaps failure
1945
* https://tracker.ceph.com/issues/50389
1946
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
1947
* https://tracker.ceph.com/issues/50216
1948
    qa: "ls: cannot access 'lost+found': No such file or directory"
1949
* https://tracker.ceph.com/issues/50390
1950
    mds: monclient: wait_auth_rotating timed out after 30
1951
1952
1953
1954 1 Patrick Donnelly
h3. 2021 Apr 08
1955
1956 2 Patrick Donnelly
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
1957
1958
* https://tracker.ceph.com/issues/45434
1959
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1960
* https://tracker.ceph.com/issues/50016
1961
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
1962
* https://tracker.ceph.com/issues/48773
1963
    qa: scrub does not complete
1964
* https://tracker.ceph.com/issues/50279
1965
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
1966
* https://tracker.ceph.com/issues/50246
1967
    mds: failure replaying journal (EMetaBlob)
1968
* https://tracker.ceph.com/issues/48365
1969
    qa: ffsb build failure on CentOS 8.2
1970
* https://tracker.ceph.com/issues/50216
1971
    qa: "ls: cannot access 'lost+found': No such file or directory"
1972
* https://tracker.ceph.com/issues/50223
1973
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1974
* https://tracker.ceph.com/issues/50280
1975
    cephadm: RuntimeError: uid/gid not found
1976
* https://tracker.ceph.com/issues/50281
1977
    qa: untar_snap_rm timeout
1978
1979
h3. 2021 Apr 08
1980
1981 1 Patrick Donnelly
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
1982
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
1983
1984
* https://tracker.ceph.com/issues/50246
1985
    mds: failure replaying journal (EMetaBlob)
1986
* https://tracker.ceph.com/issues/50250
1987
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
1988
1989
1990
h3. 2021 Apr 07
1991
1992
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
1993
1994
* https://tracker.ceph.com/issues/50215
1995
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
1996
* https://tracker.ceph.com/issues/49466
1997
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
1998
* https://tracker.ceph.com/issues/50216
1999
    qa: "ls: cannot access 'lost+found': No such file or directory"
2000
* https://tracker.ceph.com/issues/48773
2001
    qa: scrub does not complete
2002
* https://tracker.ceph.com/issues/49845
2003
    qa: failed umount in test_volumes
2004
* https://tracker.ceph.com/issues/50220
2005
    qa: dbench workload timeout
2006
* https://tracker.ceph.com/issues/50221
2007
    qa: snaptest-git-ceph failure in git diff
2008
* https://tracker.ceph.com/issues/50222
2009
    osd: 5.2s0 deep-scrub : stat mismatch
2010
* https://tracker.ceph.com/issues/50223
2011
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2012
* https://tracker.ceph.com/issues/50224
2013
    qa: test_mirroring_init_failure_with_recovery failure
2014
2015
h3. 2021 Apr 01
2016
2017
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
2018
2019
* https://tracker.ceph.com/issues/48772
2020
    qa: pjd: not ok 9, 44, 80
2021
* https://tracker.ceph.com/issues/50177
2022
    osd: "stalled aio... buggy kernel or bad device?"
2023
* https://tracker.ceph.com/issues/48771
2024
    qa: iogen: workload fails to cause balancing
2025
* https://tracker.ceph.com/issues/49845
2026
    qa: failed umount in test_volumes
2027
* https://tracker.ceph.com/issues/48773
2028
    qa: scrub does not complete
2029
* https://tracker.ceph.com/issues/48805
2030
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
2031
* https://tracker.ceph.com/issues/50178
2032
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
2033
* https://tracker.ceph.com/issues/45434
2034
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2035
2036
h3. 2021 Mar 24
2037
2038
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
2039
2040
* https://tracker.ceph.com/issues/49500
2041
    qa: "Assertion `cb_done' failed."
2042
* https://tracker.ceph.com/issues/50019
2043
    qa: mount failure with cephadm "probably no MDS server is up?"
2044
* https://tracker.ceph.com/issues/50020
2045
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
2046
* https://tracker.ceph.com/issues/48773
2047
    qa: scrub does not complete
2048
* https://tracker.ceph.com/issues/45434
2049
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2050
* https://tracker.ceph.com/issues/48805
2051
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
2052
* https://tracker.ceph.com/issues/48772
2053
    qa: pjd: not ok 9, 44, 80
2054
* https://tracker.ceph.com/issues/50021
2055
    qa: snaptest-git-ceph failure during mon thrashing
2056
* https://tracker.ceph.com/issues/48771
2057
    qa: iogen: workload fails to cause balancing
2058
* https://tracker.ceph.com/issues/50016
2059
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2060
* https://tracker.ceph.com/issues/49466
2061
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2062
2063
2064
h3. 2021 Mar 18
2065
2066
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
2067
2068
* https://tracker.ceph.com/issues/49466
2069
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2070
* https://tracker.ceph.com/issues/48773
2071
    qa: scrub does not complete
2072
* https://tracker.ceph.com/issues/48805
2073
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
2074
* https://tracker.ceph.com/issues/45434
2075
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2076
* https://tracker.ceph.com/issues/49845
2077
    qa: failed umount in test_volumes
2078
* https://tracker.ceph.com/issues/49605
2079
    mgr: drops command on the floor
2080
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2081
    qa: quota failure
2082
* https://tracker.ceph.com/issues/49928
2083
    client: items pinned in cache preventing unmount x2
2084
2085
h3. 2021 Mar 15
2086
2087
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
2088
2089
* https://tracker.ceph.com/issues/49842
2090
    qa: stuck pkg install
2091
* https://tracker.ceph.com/issues/49466
2092
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2093
* https://tracker.ceph.com/issues/49822
2094
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
2095
* https://tracker.ceph.com/issues/49240
2096
    terminate called after throwing an instance of 'std::bad_alloc'
2097
* https://tracker.ceph.com/issues/48773
2098
    qa: scrub does not complete
2099
* https://tracker.ceph.com/issues/45434
2100
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2101
* https://tracker.ceph.com/issues/49500
2102
    qa: "Assertion `cb_done' failed."
2103
* https://tracker.ceph.com/issues/49843
2104
    qa: fs/snaps/snaptest-upchildrealms.sh failure
2105
* https://tracker.ceph.com/issues/49845
2106
    qa: failed umount in test_volumes
2107
* https://tracker.ceph.com/issues/48805
2108
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
2109
* https://tracker.ceph.com/issues/49605
2110
    mgr: drops command on the floor
2111
2112
and a failure caused by PR: https://github.com/ceph/ceph/pull/39969
2113
2114
2115
h3. 2021 Mar 09
2116
2117
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
2118
2119
* https://tracker.ceph.com/issues/49500
2120
    qa: "Assertion `cb_done' failed."
2121
* https://tracker.ceph.com/issues/48805
2122
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
2123
* https://tracker.ceph.com/issues/48773
2124
    qa: scrub does not complete
2125
* https://tracker.ceph.com/issues/45434
2126
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2127
* https://tracker.ceph.com/issues/49240
2128
    terminate called after throwing an instance of 'std::bad_alloc'
2129
* https://tracker.ceph.com/issues/49466
2130
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2131
* https://tracker.ceph.com/issues/49684
2132
    qa: fs:cephadm mount does not wait for mds to be created
2133
* https://tracker.ceph.com/issues/48771
2134
    qa: iogen: workload fails to cause balancing