Main » History » Version 23

Patrick Donnelly, 09/21/2021 12:44 AM

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

One failure was caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing