h1. MAIN

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
    qa: fs:mixed-clients kernel_untar_build failure

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on rhel

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

Run didn't go well, lots of failures - debugging by dropping PRs and running against master branch. Only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09
1344
1345
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
1346
1347
* https://tracker.ceph.com/issues/49500
1348
    qa: "Assertion `cb_done' failed."
1349
* https://tracker.ceph.com/issues/48805
1350
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
1351
* https://tracker.ceph.com/issues/48773
1352
    qa: scrub does not complete
1353
* https://tracker.ceph.com/issues/45434
1354
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1355
* https://tracker.ceph.com/issues/49240
1356
    terminate called after throwing an instance of 'std::bad_alloc'
1357
* https://tracker.ceph.com/issues/49466
1358
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
1359
* https://tracker.ceph.com/issues/49684
1360
    qa: fs:cephadm mount does not wait for mds to be created
1361
* https://tracker.ceph.com/issues/48771
1362
    qa: iogen: workload fails to cause balancing