Main » History » Version 44

Venky Shankar, 03/21/2022 01:05 PM

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well; there were lots of failures. Debugging by dropping PRs and rerunning against the master branch, and only merging the unrelated PRs that pass tests.

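As a rough illustration of the approach above, here is a minimal sketch of rebuilding a reduced integration branch from master with only the PRs being kept (the branch name and PR numbers are placeholders, and the ceph GitHub remote is assumed to be named "origin"):

<pre>
#!/usr/bin/env python3
# Minimal sketch: rebuild a reduced integration branch from master with only
# the PRs being kept. Branch name and PR numbers below are placeholders.
import subprocess

KEEP_PRS = [11111, 22222]          # hypothetical "unrelated PRs that pass tests"
BRANCH = "wip-testing-reduced"     # hypothetical integration branch name

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

# Start a fresh branch from the current upstream master.
git("fetch", "origin", "master")
git("checkout", "-b", BRANCH, "origin/master")

# Merge each kept PR using GitHub's pull/<id>/head refs.
for pr in KEEP_PRS:
    git("fetch", "origin", f"pull/{pr}/head:pr-{pr}")
    git("merge", "--no-ff", "-m", f"Merge PR #{pr}", f"pr-{pr}")
</pre>
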
h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures were caused by a teuthology bug: https://tracker.ceph.com/issues/52944

A new test caused a failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures were caused by the cephadm upgrade test; fixed in a follow-up qa commit.

test_simple failures were caused by a PR in this set.

A few reruns were needed because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure was caused by a PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS abort class of failures was caused by a PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. There was some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* One class of failures was caused by a PR.
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

There was also a failure caused by a PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing