Main » History » Version 52

Venky Shankar, 05/14/2022 09:39 AM

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fail with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

Run didn't go well, lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

And a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing