Main » History » Version 76

Rishabh Dave, 08/26/2022 12:37 PM

h3. 26 Aug 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on RHEL.

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
Transient selinux ping failure.

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure


h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed


h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"


h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2


h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

Also, one failure was caused by PR: https://github.com/ceph/ceph/pull/39969


h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing