Main » History » Version 21

Patrick Donnelly, 08/28/2021 01:25 AM

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

With QA fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

One class of failures caused by a PR.

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

One failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing