Main » History » Version 55

Venky Shankar, 06/13/2022 06:54 AM

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison (see the sketch after this list)
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

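The snaptest-snap-rm-cmp.sh failure above is a checksum mismatch between a file and its snapshotted copy. As a rough illustration of what the workunit checks — a minimal Python sketch, not the actual shell script; the mount point and snapshot name here are hypothetical:

<pre><code class="python">
import hashlib
import os

def md5(path):
    # Hash the file in chunks so large files need not fit in memory.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

base = '/mnt/cephfs/snaptest'  # hypothetical CephFS mount
os.makedirs(base, exist_ok=True)
live = os.path.join(base, 'file')

with open(live, 'wb') as f:
    f.write(os.urandom(1 << 20))
before = md5(live)

# On CephFS, mkdir under the special .snap directory creates a snapshot.
os.mkdir(os.path.join(base, '.snap', 's1'))

# Overwrite the live file; the snapshot's copy must keep the old contents.
with open(live, 'wb') as f:
    f.write(os.urandom(1 << 20))

snap_copy = os.path.join(base, '.snap', 's1', 'file')
assert md5(snap_copy) == before, 'bad match: snapshot contents changed'

# rmdir of the .snap entry removes the snapshot again.
os.rmdir(os.path.join(base, '.snap', 's1'))
</code></pre>
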
h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)" (see the sketch after this list)
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

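The "Extra data" JSONDecodeError above is what Python's json module raises when there is trailing content after the first complete JSON document — for example, two JSON objects concatenated into one stream. A minimal sketch with synthetic input (not the actual qa output):

<pre><code class="python">
import json

# Two JSON documents back to back -- json.loads() accepts only a single one.
blob = '{"epoch": 1}\n{"epoch": 2}'

try:
    json.loads(blob)
except json.decoder.JSONDecodeError as e:
    print(e)  # Extra data: line 2 column 1 (char 13)

# raw_decode() parses the first document and reports where it ended, which
# is one way a caller can cope with concatenated output.
obj, end = json.JSONDecoder().raw_decode(blob)
print(obj)                  # {'epoch': 1}
print(blob[end:].strip())   # the leftover second document
</code></pre>
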
h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fail with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" (see the sketch after this list)
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

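The iogen counter check above verifies that the MDSes actually exported/imported subtrees during the workload: it reads the perf counters from each MDS and fails if the expected ones never moved. A simplified Python sketch of the idea, querying the admin socket via `ceph daemon ... perf dump` (the daemon name is hypothetical; the real check lives in the teuthology qa tasks):

<pre><code class="python">
import json
import subprocess

def perf_counters(daemon):
    # Query a co-located daemon's perf counters over its admin socket.
    out = subprocess.check_output(['ceph', 'daemon', daemon, 'perf', 'dump'])
    return json.loads(out)

# Counters the iogen run is expected to bump (section.name form, as in the
# qa error message above).
wanted = {'mds.exported', 'mds.imported'}

counters = perf_counters('mds.a')  # hypothetical daemon name
missing = set()
for want in wanted:
    section, name = want.split('.', 1)
    if counters.get(section, {}).get(name, 0) == 0:
        missing.add(want)

if missing:
    raise RuntimeError(
        'The following counters failed to be set on mds daemons: %r' % missing)
</code></pre>
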
h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well; lots of failures. Debugging by dropping PRs and rerunning against the master branch, merging only unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 09

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures were caused by a teuthology bug: https://tracker.ceph.com/issues/52944

A new test caused a failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures were caused by the cephadm upgrade test; fixed in a follow-up qa commit.

test_simple failures were caused by a PR in this set.

A few reruns were needed because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure was caused by a PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS-abort class of failures was caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those, and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* One class of failures was caused by a PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

Plus a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing