h1. <code>main</code> branch

h3. 2024-03-20

https://pulpito.ceph.com/pdonnell-2024-03-20_18:16:52-fs-wip-batrick-testing-20240320.145742-distro-default-smithi/

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: "[WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: "Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: "cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    "mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denials related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.

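As a quick illustration of the symptom (not part of the qa suite), a minimal manual check might look like the sketch below; the mountpoint path and file system name are made up for the example:

<pre>
# Hypothetical reproduction sketch for the i64502 symptom (paths/fs name are examples).
mkdir -p /mnt/cephfs-test
sudo ceph-fuse --client_fs cephfs /mnt/cephfs-test

# Ask the kernel to detach the FUSE mount.
sudo fusermount -u /mnt/cephfs-test

# On a healthy client the mount disappears immediately; with i64502 it
# lingers until the Ceph daemons are stopped during test cleanup.
mountpoint -q /mnt/cephfs-test && echo "still mounted (bug)" || echo "unmounted"
</pre>
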
h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* in the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- need to fix again

h3. 14 Nov 2023
278 207 Milind Changire
(Milind)
279
280
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
281
282
* https://tracker.ceph.com/issues/53859
283
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
284
* https://tracker.ceph.com/issues/63233
285
  mon|client|mds: valgrind reports possible leaks in the MDS
286
* https://tracker.ceph.com/issues/63521
287
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
288
* https://tracker.ceph.com/issues/57655
289
  qa: fs:mixed-clients kernel_untar_build failure
290
* https://tracker.ceph.com/issues/62580
291
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
292
* https://tracker.ceph.com/issues/57676
293
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
294
* https://tracker.ceph.com/issues/61243
295
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
296
* https://tracker.ceph.com/issues/63141
297
    qa/cephfs: test_idem_unaffected_root_squash fails
298
* https://tracker.ceph.com/issues/51964
299
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
300
* https://tracker.ceph.com/issues/63522
301
    No module named 'tasks.ceph_fuse'
302
    No module named 'tasks.kclient'
303
    No module named 'tasks.cephfs.fuse_mount'
304
    No module named 'tasks.ceph'
305
* https://tracker.ceph.com/issues/63523
306
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
307
308
309
h3. 14 Nov 2023
310 206 Venky Shankar
311
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
312
313
(Ignore the fs:upgrade test failure - the PR is excluded from merge)
314
315
* https://tracker.ceph.com/issues/57676
316
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
317
* https://tracker.ceph.com/issues/63233
318
    mon|client|mds: valgrind reports possible leaks in the MDS
319
* https://tracker.ceph.com/issues/63141
320
    qa/cephfs: test_idem_unaffected_root_squash fails
321
* https://tracker.ceph.com/issues/62580
322
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
323
* https://tracker.ceph.com/issues/57655
324
    qa: fs:mixed-clients kernel_untar_build failure
325
* https://tracker.ceph.com/issues/51964
326
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
327
* https://tracker.ceph.com/issues/63519
328
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
329
* https://tracker.ceph.com/issues/57087
330
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
331
* https://tracker.ceph.com/issues/58945
332
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
333
334 204 Rishabh Dave
h3. 7 Nov 2023
335
336 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
337
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
338
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
339 204 Rishabh Dave
340
* https://tracker.ceph.com/issues/53859
341
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
342
* https://tracker.ceph.com/issues/63233
343
  mon|client|mds: valgrind reports possible leaks in the MDS
344
* https://tracker.ceph.com/issues/57655
345
  qa: fs:mixed-clients kernel_untar_build failure
346
* https://tracker.ceph.com/issues/57676
347
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
348
349
* https://tracker.ceph.com/issues/63473
350
  fsstress.sh failed with errno 124
351
352 202 Rishabh Dave
h3. 3 Nov 2023
353 203 Rishabh Dave
354 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
355
356
* https://tracker.ceph.com/issues/63141
357
  qa/cephfs: test_idem_unaffected_root_squash fails
358
* https://tracker.ceph.com/issues/63233
359
  mon|client|mds: valgrind reports possible leaks in the MDS
360
* https://tracker.ceph.com/issues/57656
361
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
362
* https://tracker.ceph.com/issues/57655
363
  qa: fs:mixed-clients kernel_untar_build failure
364
* https://tracker.ceph.com/issues/57676
365
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
366
367
* https://tracker.ceph.com/issues/59531
368
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
369
* https://tracker.ceph.com/issues/52624
370
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
371
372 198 Patrick Donnelly
h3. 24 October 2023
373
374
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
375
376 200 Patrick Donnelly
Two failures:
377
378
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
379
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
380
381
Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.
382
383 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
384
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
385
* https://tracker.ceph.com/issues/57676
386 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
387
* https://tracker.ceph.com/issues/63233
388
    mon|client|mds: valgrind reports possible leaks in the MDS
389
* https://tracker.ceph.com/issues/59531
390
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
391
* https://tracker.ceph.com/issues/57655
392
    qa: fs:mixed-clients kernel_untar_build failure
393 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
394
    ffsb.sh failure "Resource temporarily unavailable"
395
* https://tracker.ceph.com/issues/63411
396
    qa: flush journal may cause timeouts of `scrub status`
397
* https://tracker.ceph.com/issues/61243
398
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
399
* https://tracker.ceph.com/issues/63141
400 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
401 148 Rishabh Dave
402 195 Venky Shankar
h3. 18 Oct 2023
403
404
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
405
406
* https://tracker.ceph.com/issues/52624
407
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
408
* https://tracker.ceph.com/issues/57676
409
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
410
* https://tracker.ceph.com/issues/63233
411
    mon|client|mds: valgrind reports possible leaks in the MDS
412
* https://tracker.ceph.com/issues/63141
413
    qa/cephfs: test_idem_unaffected_root_squash fails
414
* https://tracker.ceph.com/issues/59531
415
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
416
* https://tracker.ceph.com/issues/62658
417
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
418
* https://tracker.ceph.com/issues/62580
419
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
420
* https://tracker.ceph.com/issues/62067
421
    ffsb.sh failure "Resource temporarily unavailable"
422
* https://tracker.ceph.com/issues/57655
423
    qa: fs:mixed-clients kernel_untar_build failure
424
* https://tracker.ceph.com/issues/62036
425
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
426
* https://tracker.ceph.com/issues/58945
427
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
428
* https://tracker.ceph.com/issues/62847
429
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
430
431 193 Venky Shankar
h3. 13 Oct 2023
432
433
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
434
435
* https://tracker.ceph.com/issues/52624
436
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
437
* https://tracker.ceph.com/issues/62936
438
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
439
* https://tracker.ceph.com/issues/47292
440
    cephfs-shell: test_df_for_valid_file failure
441
* https://tracker.ceph.com/issues/63141
442
    qa/cephfs: test_idem_unaffected_root_squash fails
443
* https://tracker.ceph.com/issues/62081
444
    tasks/fscrypt-common does not finish, timesout
445 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
446
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
447 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
448
    mon|client|mds: valgrind reports possible leaks in the MDS
449 193 Venky Shankar
450 190 Patrick Donnelly
h3. 16 Oct 2023
451
452
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
453
454 192 Patrick Donnelly
Infrastructure issues:
455
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
456
    Host lost.
457
458 196 Patrick Donnelly
One followup fix:
459
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
460
461 192 Patrick Donnelly
Failures:
462
463
* https://tracker.ceph.com/issues/56694
464
    qa: avoid blocking forever on hung umount
465
* https://tracker.ceph.com/issues/63089
466
    qa: tasks/mirror times out
467
* https://tracker.ceph.com/issues/52624
468
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
469
* https://tracker.ceph.com/issues/59531
470
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
471
* https://tracker.ceph.com/issues/57676
472
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
473
* https://tracker.ceph.com/issues/62658 
474
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
475
* https://tracker.ceph.com/issues/61243
476
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
477
* https://tracker.ceph.com/issues/57656
478
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
479
* https://tracker.ceph.com/issues/63233
480
  mon|client|mds: valgrind reports possible leaks in the MDS
481 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
482
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
483 192 Patrick Donnelly
484 189 Rishabh Dave
h3. 9 Oct 2023
485
486
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
487
488
* https://tracker.ceph.com/issues/54460
489
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
490
* https://tracker.ceph.com/issues/63141
491
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
492
* https://tracker.ceph.com/issues/62937
493
  logrotate doesn't support parallel execution on same set of logfiles
494
* https://tracker.ceph.com/issues/61400
495
  valgrind+ceph-mon issues
496
* https://tracker.ceph.com/issues/57676
497
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
498
* https://tracker.ceph.com/issues/55805
499
  error during scrub thrashing reached max tries in 900 secs
500
501 188 Venky Shankar
h3. 26 Sep 2023
502
503
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
504
505
* https://tracker.ceph.com/issues/52624
506
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
507
* https://tracker.ceph.com/issues/62873
508
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
509
* https://tracker.ceph.com/issues/61400
510
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
511
* https://tracker.ceph.com/issues/57676
512
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
513
* https://tracker.ceph.com/issues/62682
514
    mon: no mdsmap broadcast after "fs set joinable" is set to true
515
* https://tracker.ceph.com/issues/63089
516
    qa: tasks/mirror times out
517
518 185 Rishabh Dave
h3. 22 Sep 2023
519
520
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
521
522
* https://tracker.ceph.com/issues/59348
523
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
524
* https://tracker.ceph.com/issues/59344
525
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
526
* https://tracker.ceph.com/issues/59531
527
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
528
* https://tracker.ceph.com/issues/61574
529
  build failure for mdtest project
530
* https://tracker.ceph.com/issues/62702
531
  fsstress.sh: MDS slow requests for the internal 'rename' requests
532
* https://tracker.ceph.com/issues/57676
533
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
534
535
* https://tracker.ceph.com/issues/62863 
536
  deadlock in ceph-fuse causes teuthology job to hang and fail
537
* https://tracker.ceph.com/issues/62870
538
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
539
* https://tracker.ceph.com/issues/62873
540
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
541
542 186 Venky Shankar
h3. 20 Sep 2023
543
544
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
545
546
* https://tracker.ceph.com/issues/52624
547
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
548
* https://tracker.ceph.com/issues/61400
549
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
550
* https://tracker.ceph.com/issues/61399
551
    libmpich: undefined references to fi_strerror
552
* https://tracker.ceph.com/issues/62081
553
    tasks/fscrypt-common does not finish, timesout
554
* https://tracker.ceph.com/issues/62658 
555
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
556
* https://tracker.ceph.com/issues/62915
557
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
558
* https://tracker.ceph.com/issues/59531
559
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
560
* https://tracker.ceph.com/issues/62873
561
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
562
* https://tracker.ceph.com/issues/62936
563
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
564
* https://tracker.ceph.com/issues/62937
565
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
566
* https://tracker.ceph.com/issues/62510
567
    snaptest-git-ceph.sh failure with fs/thrash
570
* https://tracker.ceph.com/issues/62126
571
    test failure: suites/blogbench.sh stops running
572 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
573
    mon: no mdsmap broadcast after "fs set joinable" is set to true
574 186 Venky Shankar
575 184 Milind Changire
h3. 19 Sep 2023
576
577
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
578
579
* https://tracker.ceph.com/issues/58220#note-9
580
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
581
* https://tracker.ceph.com/issues/62702
582
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
583
* https://tracker.ceph.com/issues/57676
584
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
585
* https://tracker.ceph.com/issues/59348
586
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
587
* https://tracker.ceph.com/issues/52624
588
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
589
* https://tracker.ceph.com/issues/51964
590
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
591
* https://tracker.ceph.com/issues/61243
592
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
593
* https://tracker.ceph.com/issues/59344
594
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
595
* https://tracker.ceph.com/issues/62873
596
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
597
* https://tracker.ceph.com/issues/59413
598
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
599
* https://tracker.ceph.com/issues/53859
600
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
601
* https://tracker.ceph.com/issues/62482
602
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
603
604 178 Patrick Donnelly
605 177 Venky Shankar
h3. 13 Sep 2023
606
607
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
608
609
* https://tracker.ceph.com/issues/52624
610
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
611
* https://tracker.ceph.com/issues/57655
612
    qa: fs:mixed-clients kernel_untar_build failure
613
* https://tracker.ceph.com/issues/57676
614
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
615
* https://tracker.ceph.com/issues/61243
616
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
617
* https://tracker.ceph.com/issues/62567
618
    postgres workunit times out - MDS_SLOW_REQUEST in logs
619
* https://tracker.ceph.com/issues/61400
620
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
621
* https://tracker.ceph.com/issues/61399
622
    libmpich: undefined references to fi_strerror
627
* https://tracker.ceph.com/issues/51964
628
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
629
* https://tracker.ceph.com/issues/62081
630
    tasks/fscrypt-common does not finish, timesout
631 178 Patrick Donnelly
632 179 Patrick Donnelly
h3. 2023 Sep 12
633 178 Patrick Donnelly
634
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
635 1 Patrick Donnelly
636 181 Patrick Donnelly
A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
637
638 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
639 181 Patrick Donnelly
640
Failures:
641
642 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
643
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
644
* https://tracker.ceph.com/issues/57656
645
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
646
* https://tracker.ceph.com/issues/55805
647
  error scrub thrashing reached max tries in 900 secs
648
* https://tracker.ceph.com/issues/62067
649
    ffsb.sh failure "Resource temporarily unavailable"
650
* https://tracker.ceph.com/issues/59344
651
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
652
* https://tracker.ceph.com/issues/61399
653 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
654
* https://tracker.ceph.com/issues/62832
655
  common: config_proxy deadlock during shutdown (and possibly other times)
656
* https://tracker.ceph.com/issues/59413
657 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
658 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
659
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
660
* https://tracker.ceph.com/issues/62567
661
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
662
* https://tracker.ceph.com/issues/54460
663
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
664
* https://tracker.ceph.com/issues/58220#note-9
665
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
666
* https://tracker.ceph.com/issues/59348
667
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
668 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
669
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
670
* https://tracker.ceph.com/issues/62848
671
    qa: fail_fs upgrade scenario hanging
672
* https://tracker.ceph.com/issues/62081
673
    tasks/fscrypt-common does not finish, timesout
674 177 Venky Shankar
675 176 Venky Shankar
h3. 11 Sep 2023
676 175 Venky Shankar
677
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
678
679
* https://tracker.ceph.com/issues/52624
680
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
681
* https://tracker.ceph.com/issues/61399
682
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
683
* https://tracker.ceph.com/issues/57655
684
    qa: fs:mixed-clients kernel_untar_build failure
685
* https://tracker.ceph.com/issues/61399
686
    ior build failure
687
* https://tracker.ceph.com/issues/59531
688
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
689
* https://tracker.ceph.com/issues/59344
690
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
691
* https://tracker.ceph.com/issues/59346
692
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
693
* https://tracker.ceph.com/issues/59348
694
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
695
* https://tracker.ceph.com/issues/57676
696
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
697
* https://tracker.ceph.com/issues/61243
698
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
699
* https://tracker.ceph.com/issues/62567
700
  postgres workunit times out - MDS_SLOW_REQUEST in logs
701
702
703 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
704
705
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
706
707
* https://tracker.ceph.com/issues/51964
708
  test_cephfs_mirror_restart_sync_on_blocklist failure
709
* https://tracker.ceph.com/issues/59348
710
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
711
* https://tracker.ceph.com/issues/53859
712
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
713
* https://tracker.ceph.com/issues/61892
714
  test_strays.TestStrays.test_snapshot_remove failed
715
* https://tracker.ceph.com/issues/54460
716
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
717
* https://tracker.ceph.com/issues/59346
718
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
719
* https://tracker.ceph.com/issues/59344
720
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
721
* https://tracker.ceph.com/issues/62484
722
  qa: ffsb.sh test failure
723
* https://tracker.ceph.com/issues/62567
724
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
725
  
726
* https://tracker.ceph.com/issues/61399
727
  ior build failure
728
* https://tracker.ceph.com/issues/57676
729
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
730
* https://tracker.ceph.com/issues/55805
731
  error scrub thrashing reached max tries in 900 secs
732
733 172 Rishabh Dave
h3. 6 Sep 2023
734 171 Rishabh Dave
735 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
736 171 Rishabh Dave
737 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
738
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
739 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
740
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
741 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
742 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
743
* https://tracker.ceph.com/issues/59348
744
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
745
* https://tracker.ceph.com/issues/54462
746
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
747
* https://tracker.ceph.com/issues/62556
748
  test_acls: xfstests_dev: python2 is missing
749
* https://tracker.ceph.com/issues/62067
750
  ffsb.sh failure "Resource temporarily unavailable"
751
* https://tracker.ceph.com/issues/57656
752
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
753 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
754
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
755 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
756 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
757
758 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
759
  ior build failure
760
* https://tracker.ceph.com/issues/57676
761
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
762
* https://tracker.ceph.com/issues/55805
763
  error scrub thrashing reached max tries in 900 secs
764 173 Rishabh Dave
765
* https://tracker.ceph.com/issues/62567
766
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
767
* https://tracker.ceph.com/issues/62702
768
  workunit test suites/fsstress.sh on smithi066 with status 124
769 170 Rishabh Dave
770
h3. 5 Sep 2023
771
772
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
773
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
774
  this run has failures, but according to Adam King these are not relevant and should be ignored
775
776
* https://tracker.ceph.com/issues/61892
777
  test_snapshot_remove (test_strays.TestStrays) failed
778
* https://tracker.ceph.com/issues/59348
779
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
780
* https://tracker.ceph.com/issues/54462
781
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
782
* https://tracker.ceph.com/issues/62067
783
  ffsb.sh failure "Resource temporarily unavailable"
784
* https://tracker.ceph.com/issues/57656 
785
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
786
* https://tracker.ceph.com/issues/59346
787
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
788
* https://tracker.ceph.com/issues/59344
789
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
790
* https://tracker.ceph.com/issues/50223
791
  client.xxxx isn't responding to mclientcaps(revoke)
792
* https://tracker.ceph.com/issues/57655
793
  qa: fs:mixed-clients kernel_untar_build failure
794
* https://tracker.ceph.com/issues/62187
795
  iozone.sh: line 5: iozone: command not found
796
 
797
* https://tracker.ceph.com/issues/61399
798
  ior build failure
799
* https://tracker.ceph.com/issues/57676
800
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
801
* https://tracker.ceph.com/issues/55805
802
  error scrub thrashing reached max tries in 900 secs
803 169 Venky Shankar
804
805
h3. 31 Aug 2023
806
807
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
808
809
* https://tracker.ceph.com/issues/52624
810
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
811
* https://tracker.ceph.com/issues/62187
812
    iozone: command not found
813
* https://tracker.ceph.com/issues/61399
814
    ior build failure
815
* https://tracker.ceph.com/issues/59531
816
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
817
* https://tracker.ceph.com/issues/61399
818
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
819
* https://tracker.ceph.com/issues/57655
820
    qa: fs:mixed-clients kernel_untar_build failure
821
* https://tracker.ceph.com/issues/59344
822
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
823
* https://tracker.ceph.com/issues/59346
824
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
825
* https://tracker.ceph.com/issues/59348
826
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
827
* https://tracker.ceph.com/issues/59413
828
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
829
* https://tracker.ceph.com/issues/62653
830
    qa: unimplemented fcntl command: 1036 with fsstress
831
* https://tracker.ceph.com/issues/61400
832
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
833
* https://tracker.ceph.com/issues/62658
834
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
835
* https://tracker.ceph.com/issues/62188
836
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
837 168 Venky Shankar
838
839
h3. 25 Aug 2023
840
841
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
842
843
* https://tracker.ceph.com/issues/59344
844
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
845
* https://tracker.ceph.com/issues/59346
846
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
847
* https://tracker.ceph.com/issues/59348
848
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
849
* https://tracker.ceph.com/issues/57655
850
    qa: fs:mixed-clients kernel_untar_build failure
851
* https://tracker.ceph.com/issues/61243
852
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
853
* https://tracker.ceph.com/issues/61399
854
    ior build failure
855
* https://tracker.ceph.com/issues/61399
856
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
857
* https://tracker.ceph.com/issues/62484
858
    qa: ffsb.sh test failure
859
* https://tracker.ceph.com/issues/59531
860
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
861
* https://tracker.ceph.com/issues/62510
862
    snaptest-git-ceph.sh failure with fs/thrash
863 167 Venky Shankar
864
865
h3. 24 Aug 2023
866
867
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
868
869
* https://tracker.ceph.com/issues/57676
870
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
871
* https://tracker.ceph.com/issues/51964
872
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
873
* https://tracker.ceph.com/issues/59344
874
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
875
* https://tracker.ceph.com/issues/59346
876
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
877
* https://tracker.ceph.com/issues/59348
878
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
879
* https://tracker.ceph.com/issues/61399
880
    ior build failure
881
* https://tracker.ceph.com/issues/61399
882
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
883
* https://tracker.ceph.com/issues/62510
884
    snaptest-git-ceph.sh failure with fs/thrash
885
* https://tracker.ceph.com/issues/62484
886
    qa: ffsb.sh test failure
887
* https://tracker.ceph.com/issues/57087
888
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
889
* https://tracker.ceph.com/issues/57656
890
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
891
* https://tracker.ceph.com/issues/62187
892
    iozone: command not found
893
* https://tracker.ceph.com/issues/62188
894
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
895
* https://tracker.ceph.com/issues/62567
896
    postgres workunit times out - MDS_SLOW_REQUEST in logs
897 166 Venky Shankar
898
899
h3. 22 Aug 2023
900
901
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
902
903
* https://tracker.ceph.com/issues/57676
904
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
905
* https://tracker.ceph.com/issues/51964
906
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
907
* https://tracker.ceph.com/issues/59344
908
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
909
* https://tracker.ceph.com/issues/59346
910
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
911
* https://tracker.ceph.com/issues/59348
912
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
913
* https://tracker.ceph.com/issues/61399
914
    ior build failure
915
* https://tracker.ceph.com/issues/61399
916
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
917
* https://tracker.ceph.com/issues/57655
918
    qa: fs:mixed-clients kernel_untar_build failure
919
* https://tracker.ceph.com/issues/61243
920
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
921
* https://tracker.ceph.com/issues/62188
922
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
923
* https://tracker.ceph.com/issues/62510
924
    snaptest-git-ceph.sh failure with fs/thrash
925
* https://tracker.ceph.com/issues/62511
926
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
927 165 Venky Shankar
928
929
h3. 14 Aug 2023
930
931
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
932
933
* https://tracker.ceph.com/issues/51964
934
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
935
* https://tracker.ceph.com/issues/61400
936
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
937
* https://tracker.ceph.com/issues/61399
938
    ior build failure
939
* https://tracker.ceph.com/issues/59348
940
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
941
* https://tracker.ceph.com/issues/59531
942
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
943
* https://tracker.ceph.com/issues/59344
944
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
945
* https://tracker.ceph.com/issues/59346
946
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
947
* https://tracker.ceph.com/issues/61399
948
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
949
* https://tracker.ceph.com/issues/59684 [kclient bug]
950
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
951
* https://tracker.ceph.com/issues/61243 (NEW)
952
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
953
* https://tracker.ceph.com/issues/57655
954
    qa: fs:mixed-clients kernel_untar_build failure
955
* https://tracker.ceph.com/issues/57656
956
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
957 163 Venky Shankar
958
959
h3. 28 JULY 2023
960
961
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
962
963
* https://tracker.ceph.com/issues/51964
964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
965
* https://tracker.ceph.com/issues/61400
966
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
967
* https://tracker.ceph.com/issues/61399
968
    ior build failure
969
* https://tracker.ceph.com/issues/57676
970
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
971
* https://tracker.ceph.com/issues/59348
972
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
973
* https://tracker.ceph.com/issues/59531
974
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
975
* https://tracker.ceph.com/issues/59344
976
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
977
* https://tracker.ceph.com/issues/59346
978
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
979
* https://github.com/ceph/ceph/pull/52556
980
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
981
* https://tracker.ceph.com/issues/62187
982
    iozone: command not found
983
* https://tracker.ceph.com/issues/61399
984
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
985
* https://tracker.ceph.com/issues/62188
986 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
987 158 Rishabh Dave
988
h3. 24 Jul 2023
989
990
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
991
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
992
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
993
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
994
One more run to check whether blogbench.sh fails every time:
995
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
996
blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failures were not related to any of the PRs under testing -
997 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
998
999
* https://tracker.ceph.com/issues/61892
1000
  test_snapshot_remove (test_strays.TestStrays) failed
1001
* https://tracker.ceph.com/issues/53859
1002
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1003
* https://tracker.ceph.com/issues/61982
1004
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1005
* https://tracker.ceph.com/issues/52438
1006
  qa: ffsb timeout
1007
* https://tracker.ceph.com/issues/54460
1008
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1009
* https://tracker.ceph.com/issues/57655
1010
  qa: fs:mixed-clients kernel_untar_build failure
1011
* https://tracker.ceph.com/issues/48773
1012
  reached max tries: scrub does not complete
1013
* https://tracker.ceph.com/issues/58340
1014
  mds: fsstress.sh hangs with multimds
1015
* https://tracker.ceph.com/issues/61400
1016
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1017
* https://tracker.ceph.com/issues/57206
1018
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1019
  
1020
* https://tracker.ceph.com/issues/57656
1021
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1022
* https://tracker.ceph.com/issues/61399
1023
  ior build failure
1024
* https://tracker.ceph.com/issues/57676
1025
  error during scrub thrashing: backtrace
1026
  
1027
* https://tracker.ceph.com/issues/38452
1028
  'sudo -u postgres -- pgbench -s 500 -i' failed
1029 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1030 157 Venky Shankar
  blogbench.sh failure
1031
1032
h3. 18 July 2023
1033
1034
* https://tracker.ceph.com/issues/52624
1035
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1036
* https://tracker.ceph.com/issues/57676
1037
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1038
* https://tracker.ceph.com/issues/54460
1039
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1040
* https://tracker.ceph.com/issues/57655
1041
    qa: fs:mixed-clients kernel_untar_build failure
1042
* https://tracker.ceph.com/issues/51964
1043
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1044
* https://tracker.ceph.com/issues/59344
1045
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1046
* https://tracker.ceph.com/issues/61182
1047
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1048
* https://tracker.ceph.com/issues/61957
1049
    test_client_limits.TestClientLimits.test_client_release_bug
1050
* https://tracker.ceph.com/issues/59348
1051
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1052
* https://tracker.ceph.com/issues/61892
1053
    test_strays.TestStrays.test_snapshot_remove failed
1054
* https://tracker.ceph.com/issues/59346
1055
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1056
* https://tracker.ceph.com/issues/44565
1057
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1058
* https://tracker.ceph.com/issues/62067
1059
    ffsb.sh failure "Resource temporarily unavailable"
1060 156 Venky Shankar
1061
1062
h3. 17 July 2023
1063
1064
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1065
1066
* https://tracker.ceph.com/issues/61982
1067
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1068
* https://tracker.ceph.com/issues/59344
1069
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1070
* https://tracker.ceph.com/issues/61182
1071
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1072
* https://tracker.ceph.com/issues/61957
1073
    test_client_limits.TestClientLimits.test_client_release_bug
1074
* https://tracker.ceph.com/issues/61400
1075
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1076
* https://tracker.ceph.com/issues/59348
1077
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1078
* https://tracker.ceph.com/issues/61892
1079
    test_strays.TestStrays.test_snapshot_remove failed
1080
* https://tracker.ceph.com/issues/59346
1081
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1082
* https://tracker.ceph.com/issues/62036
1083
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1084
* https://tracker.ceph.com/issues/61737
1085
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1086
* https://tracker.ceph.com/issues/44565
1087
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1088 155 Rishabh Dave
1089 1 Patrick Donnelly
1090 153 Rishabh Dave
h3. 13 July 2023 Run 2
1091 152 Rishabh Dave
1092
1093
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1094
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1095
1096
* https://tracker.ceph.com/issues/61957
1097
  test_client_limits.TestClientLimits.test_client_release_bug
1098
* https://tracker.ceph.com/issues/61982
1099
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1100
* https://tracker.ceph.com/issues/59348
1101
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1102
* https://tracker.ceph.com/issues/59344
1103
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1104
* https://tracker.ceph.com/issues/54460
1105
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1106
* https://tracker.ceph.com/issues/57655
1107
  qa: fs:mixed-clients kernel_untar_build failure
1108
* https://tracker.ceph.com/issues/61400
1109
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1110
* https://tracker.ceph.com/issues/61399
1111
  ior build failure
1112
1113 151 Venky Shankar
h3. 13 July 2023
1114
1115
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1116
1117
* https://tracker.ceph.com/issues/54460
1118
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1119
* https://tracker.ceph.com/issues/61400
1120
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1121
* https://tracker.ceph.com/issues/57655
1122
    qa: fs:mixed-clients kernel_untar_build failure
1123
* https://tracker.ceph.com/issues/61945
1124
    LibCephFS.DelegTimeout failure
1125
* https://tracker.ceph.com/issues/52624
1126
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1127
* https://tracker.ceph.com/issues/57676
1128
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1129
* https://tracker.ceph.com/issues/59348
1130
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1131
* https://tracker.ceph.com/issues/59344
1132
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1133
* https://tracker.ceph.com/issues/51964
1134
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1135
* https://tracker.ceph.com/issues/59346
1136
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1137
* https://tracker.ceph.com/issues/61982
1138
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1139 150 Rishabh Dave
1140
1141
h3. 13 Jul 2023
1142
1143
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1144
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1145
1146
* https://tracker.ceph.com/issues/61957
1147
  test_client_limits.TestClientLimits.test_client_release_bug
1148
* https://tracker.ceph.com/issues/59348
1149
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1150
* https://tracker.ceph.com/issues/59346
1151
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1152
* https://tracker.ceph.com/issues/48773
1153
  scrub does not complete: reached max tries
1154
* https://tracker.ceph.com/issues/59344
1155
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1156
* https://tracker.ceph.com/issues/52438
1157
  qa: ffsb timeout
1158
* https://tracker.ceph.com/issues/57656
1159
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1160
* https://tracker.ceph.com/issues/58742
1161
  xfstests-dev: kcephfs: generic
1162
* https://tracker.ceph.com/issues/61399
1163 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1164 149 Rishabh Dave
1165 148 Rishabh Dave
h3. 12 July 2023
1166
1167
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1168
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1169
1170
* https://tracker.ceph.com/issues/61892
1171
  test_strays.TestStrays.test_snapshot_remove failed
1172
* https://tracker.ceph.com/issues/59348
1173
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1174
* https://tracker.ceph.com/issues/53859
1175
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1176
* https://tracker.ceph.com/issues/59346
1177
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1178
* https://tracker.ceph.com/issues/58742
1179
  xfstests-dev: kcephfs: generic
1180
* https://tracker.ceph.com/issues/59344
1181
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1182
* https://tracker.ceph.com/issues/52438
1183
  qa: ffsb timeout
1184
* https://tracker.ceph.com/issues/57656
1185
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1186
* https://tracker.ceph.com/issues/54460
1187
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1188
* https://tracker.ceph.com/issues/57655
1189
  qa: fs:mixed-clients kernel_untar_build failure
1190
* https://tracker.ceph.com/issues/61182
1191
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1192
* https://tracker.ceph.com/issues/61400
1193
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1194 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1195 146 Patrick Donnelly
  reached max tries: scrub does not complete
1196
1197
h3. 05 July 2023
1198
1199
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1200
1201 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1202 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1203
1204
h3. 27 Jun 2023
1205
1206
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1207 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1208
1209
* https://tracker.ceph.com/issues/59348
1210
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1211
* https://tracker.ceph.com/issues/54460
1212
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1213
* https://tracker.ceph.com/issues/59346
1214
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1215
* https://tracker.ceph.com/issues/59344
1216
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1217
* https://tracker.ceph.com/issues/61399
1218
  libmpich: undefined references to fi_strerror
1219
* https://tracker.ceph.com/issues/50223
1220
  client.xxxx isn't responding to mclientcaps(revoke)
1221 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1222
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1223 142 Venky Shankar
1224
1225
h3. 22 June 2023
1226
1227
* https://tracker.ceph.com/issues/57676
1228
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1229
* https://tracker.ceph.com/issues/54460
1230
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1231
* https://tracker.ceph.com/issues/59344
1232
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1233
* https://tracker.ceph.com/issues/59348
1234
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1235
* https://tracker.ceph.com/issues/61400
1236
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1237
* https://tracker.ceph.com/issues/57655
1238
    qa: fs:mixed-clients kernel_untar_build failure
1239
* https://tracker.ceph.com/issues/61394
1240
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1241
* https://tracker.ceph.com/issues/61762
1242
    qa: wait_for_clean: failed before timeout expired
1243
* https://tracker.ceph.com/issues/61775
1244
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1245
* https://tracker.ceph.com/issues/44565
1246
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1247
* https://tracker.ceph.com/issues/61790
1248
    cephfs client to mds comms remain silent after reconnect
1249
* https://tracker.ceph.com/issues/61791
1250
    snaptest-git-ceph.sh test timed out (job dead)
1251 139 Venky Shankar
1252
1253
h3. 20 June 2023
1254
1255
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1256
1257
* https://tracker.ceph.com/issues/57676
1258
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1259
* https://tracker.ceph.com/issues/54460
1260
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1261 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1262 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1263 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1264 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1265
* https://tracker.ceph.com/issues/59344
1266
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1267
* https://tracker.ceph.com/issues/59348
1268
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1269
* https://tracker.ceph.com/issues/57656
1270
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1271
* https://tracker.ceph.com/issues/61400
1272
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1273
* https://tracker.ceph.com/issues/57655
1274
    qa: fs:mixed-clients kernel_untar_build failure
1275
* https://tracker.ceph.com/issues/44565
1276
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1277
* https://tracker.ceph.com/issues/61737
1278 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1279
1280
h3. 16 June 2023
1281
1282 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1283 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1284 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1285 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1286
1287
1288
* https://tracker.ceph.com/issues/59344
1289
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1290 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1291
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1292 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1293
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1294
* https://tracker.ceph.com/issues/57656
1295
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1296
* https://tracker.ceph.com/issues/54460
1297
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1298 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1299
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1300 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1301
  libmpich: undefined references to fi_strerror
1302
* https://tracker.ceph.com/issues/58945
1303
  xfstests-dev: ceph-fuse: generic 
1304 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1305 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1306
1307
h3. 24 May 2023
1308
1309
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1310
1311
* https://tracker.ceph.com/issues/57676
1312
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1313
* https://tracker.ceph.com/issues/59683
1314
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1315
* https://tracker.ceph.com/issues/61399
1316
    qa: "[Makefile:299: ior] Error 1"
1317
* https://tracker.ceph.com/issues/61265
1318
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1319
* https://tracker.ceph.com/issues/59348
1320
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1321
* https://tracker.ceph.com/issues/59346
1322
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1323
* https://tracker.ceph.com/issues/61400
1324
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1325
* https://tracker.ceph.com/issues/54460
1326
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1327
* https://tracker.ceph.com/issues/51964
1328
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1329
* https://tracker.ceph.com/issues/59344
1330
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1331
* https://tracker.ceph.com/issues/61407
1332
    mds: abort on CInode::verify_dirfrags
1333
* https://tracker.ceph.com/issues/48773
1334
    qa: scrub does not complete
1335
* https://tracker.ceph.com/issues/57655
1336
    qa: fs:mixed-clients kernel_untar_build failure
1337
* https://tracker.ceph.com/issues/61409
1338 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1339
1340
h3. 15 May 2023
1341 130 Venky Shankar
1342 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1343
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1344
1345
* https://tracker.ceph.com/issues/52624
1346
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1347
* https://tracker.ceph.com/issues/54460
1348
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1349
* https://tracker.ceph.com/issues/57676
1350
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1351
* https://tracker.ceph.com/issues/59684 [kclient bug]
1352
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1353
* https://tracker.ceph.com/issues/59348
1354
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1355 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1356
    dbench test results in call trace in dmesg [kclient bug]
1357 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1358 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1359 125 Venky Shankar
1360
 
1361 129 Rishabh Dave
h3. 11 May 2023
1362
1363
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1364
1365
* https://tracker.ceph.com/issues/59684 [kclient bug]
1366
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1367
* https://tracker.ceph.com/issues/59348
1368
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1369
* https://tracker.ceph.com/issues/57655
1370
  qa: fs:mixed-clients kernel_untar_build failure
1371
* https://tracker.ceph.com/issues/57676
1372
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1373
* https://tracker.ceph.com/issues/55805
1374
  error during scrub thrashing reached max tries in 900 secs
1375
* https://tracker.ceph.com/issues/54460
1376
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1377
* https://tracker.ceph.com/issues/57656
1378
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1379
* https://tracker.ceph.com/issues/58220
1380
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1381 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1382
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1383 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1384
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1385 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1386
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1387 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1388
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1389
1390 125 Venky Shankar
h3. 11 May 2023
1391 127 Venky Shankar
1392
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1393 126 Venky Shankar
1394 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1395
 was included in the branch; however, the PR got updated and needs a retest).
1396
1397
* https://tracker.ceph.com/issues/52624
1398
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1399
* https://tracker.ceph.com/issues/54460
1400
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1401
* https://tracker.ceph.com/issues/57676
1402
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1403
* https://tracker.ceph.com/issues/59683
1404
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1405
* https://tracker.ceph.com/issues/59684 [kclient bug]
1406
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1407
* https://tracker.ceph.com/issues/59348
1408 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1409
1410
h3. 09 May 2023
1411
1412
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1413
1414
* https://tracker.ceph.com/issues/52624
1415
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1416
* https://tracker.ceph.com/issues/58340
1417
    mds: fsstress.sh hangs with multimds
1418
* https://tracker.ceph.com/issues/54460
1419
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1420
* https://tracker.ceph.com/issues/57676
1421
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1422
* https://tracker.ceph.com/issues/51964
1423
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1424
* https://tracker.ceph.com/issues/59350
1425
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1426
* https://tracker.ceph.com/issues/59683
1427
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1428
* https://tracker.ceph.com/issues/59684 [kclient bug]
1429
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1430
* https://tracker.ceph.com/issues/59348
1431 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1432
1433
h3. 10 Apr 2023
1434
1435
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1436
1437
* https://tracker.ceph.com/issues/52624
1438
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1439
* https://tracker.ceph.com/issues/58340
1440
    mds: fsstress.sh hangs with multimds
1441
* https://tracker.ceph.com/issues/54460
1442
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1443
* https://tracker.ceph.com/issues/57676
1444
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1445 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1446 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1447 121 Rishabh Dave
1448 120 Rishabh Dave
h3. 31 Mar 2023
1449 122 Rishabh Dave
1450
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1451 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1452
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1453
1454
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1455
1456
* https://tracker.ceph.com/issues/57676
1457
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1458
* https://tracker.ceph.com/issues/54460
1459
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1460
* https://tracker.ceph.com/issues/58220
1461
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1462
* https://tracker.ceph.com/issues/58220#note-9
1463
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1464
* https://tracker.ceph.com/issues/56695
1465
  Command failed (workunit test suites/pjd.sh)
1466
* https://tracker.ceph.com/issues/58564 
1467
  workunit dbench failed with error code 1
1468
* https://tracker.ceph.com/issues/57206
1469
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1470
* https://tracker.ceph.com/issues/57580
1471
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1472
* https://tracker.ceph.com/issues/58940
1473
  ceph osd hit ceph_abort
1474
* https://tracker.ceph.com/issues/55805
1475 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1476
1477
h3. 30 March 2023
1478
1479
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1480
1481
* https://tracker.ceph.com/issues/58938
1482
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1483
* https://tracker.ceph.com/issues/51964
1484
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1485
* https://tracker.ceph.com/issues/58340
1486 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1487
1488 115 Venky Shankar
h3. 29 March 2023
1489 114 Venky Shankar
1490
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1491
1492
* https://tracker.ceph.com/issues/56695
1493
    [RHEL stock] pjd test failures
1494
* https://tracker.ceph.com/issues/57676
1495
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1496
* https://tracker.ceph.com/issues/57087
1497
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1498 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1499
    mds: fsstress.sh hangs with multimds
1500 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1501
    qa: fs:mixed-clients kernel_untar_build failure
1502 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1503
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1504 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1505 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1506
1507
h3. 13 Mar 2023
1508
1509
* https://tracker.ceph.com/issues/56695
1510
    [RHEL stock] pjd test failures
1511
* https://tracker.ceph.com/issues/57676
1512
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1513
* https://tracker.ceph.com/issues/51964
1514
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1515
* https://tracker.ceph.com/issues/54460
1516
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1517
* https://tracker.ceph.com/issues/57656
1518 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1519
1520
h3. 09 Mar 2023
1521
1522
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1523
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1524
1525
* https://tracker.ceph.com/issues/56695
1526
    [RHEL stock] pjd test failures
1527
* https://tracker.ceph.com/issues/57676
1528
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1529
* https://tracker.ceph.com/issues/51964
1530
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1531
* https://tracker.ceph.com/issues/54460
1532
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1533
* https://tracker.ceph.com/issues/58340
1534
    mds: fsstress.sh hangs with multimds
1535
* https://tracker.ceph.com/issues/57087
1536 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1537
1538
h3. 07 Mar 2023
1539
1540
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1541
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1542
1543
* https://tracker.ceph.com/issues/56695
1544
    [RHEL stock] pjd test failures
1545
* https://tracker.ceph.com/issues/57676
1546
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1547
* https://tracker.ceph.com/issues/51964
1548
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1549
* https://tracker.ceph.com/issues/57656
1550
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1551
* https://tracker.ceph.com/issues/57655
1552
    qa: fs:mixed-clients kernel_untar_build failure
1553
* https://tracker.ceph.com/issues/58220
1554
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1555
* https://tracker.ceph.com/issues/54460
1556
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1557
* https://tracker.ceph.com/issues/58934
1558 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1559
1560
h3. 28 Feb 2023
1561
1562
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1563
1564
* https://tracker.ceph.com/issues/56695
1565
    [RHEL stock] pjd test failures
1566
* https://tracker.ceph.com/issues/57676
1567
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1568 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1569 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1570
1571 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1572
1573
h3. 25 Jan 2023
1574
1575
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1576
1577
* https://tracker.ceph.com/issues/52624
1578
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1579
* https://tracker.ceph.com/issues/56695
1580
    [RHEL stock] pjd test failures
1581
* https://tracker.ceph.com/issues/57676
1582
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1583
* https://tracker.ceph.com/issues/56446
1584
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1585
* https://tracker.ceph.com/issues/57206
1586
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1587
* https://tracker.ceph.com/issues/58220
1588
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1589
* https://tracker.ceph.com/issues/58340
1590
  mds: fsstress.sh hangs with multimds
1591
* https://tracker.ceph.com/issues/56011
1592
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1593
* https://tracker.ceph.com/issues/54460
1594 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1595
1596
h3. 30 JAN 2023
1597
1598
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1599
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1600 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1601
1602 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1603
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1604
* https://tracker.ceph.com/issues/56695
1605
  [RHEL stock] pjd test failures
1606
* https://tracker.ceph.com/issues/57676
1607
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1608
* https://tracker.ceph.com/issues/55332
1609
  Failure in snaptest-git-ceph.sh
1610
* https://tracker.ceph.com/issues/51964
1611
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1612
* https://tracker.ceph.com/issues/56446
1613
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1614
* https://tracker.ceph.com/issues/57655 
1615
  qa: fs:mixed-clients kernel_untar_build failure
1616
* https://tracker.ceph.com/issues/54460
1617
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1618 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1619
  mds: fsstress.sh hangs with multimds
1620 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1621 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1622
1623
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1624 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1625
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1626 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1627 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1628
1629
h3. 15 Dec 2022
1630
1631
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1632
1633
* https://tracker.ceph.com/issues/52624
1634
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1635
* https://tracker.ceph.com/issues/56695
1636
    [RHEL stock] pjd test failures
1637
* https://tracker.ceph.com/issues/58219
1638
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1639
* https://tracker.ceph.com/issues/57655
1640
    qa: fs:mixed-clients kernel_untar_build failure
1641
* https://tracker.ceph.com/issues/57676
1642
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1643
* https://tracker.ceph.com/issues/58340
1644 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1645
1646
h3. 08 Dec 2022
1647 99 Venky Shankar
1648 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1649
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1650
1651
(lots of transient git.ceph.com failures)
1652
1653
* https://tracker.ceph.com/issues/52624
1654
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1655
* https://tracker.ceph.com/issues/56695
1656
    [RHEL stock] pjd test failures
1657
* https://tracker.ceph.com/issues/57655
1658
    qa: fs:mixed-clients kernel_untar_build failure
1659
* https://tracker.ceph.com/issues/58219
1660
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1661
* https://tracker.ceph.com/issues/58220
1662
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1663 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1664
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1665 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1666
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1667
* https://tracker.ceph.com/issues/54460
1668
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1669 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1670 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1671
1672
h3. 14 Oct 2022
1673
1674
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1675
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1676
1677
* https://tracker.ceph.com/issues/52624
1678
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1679
* https://tracker.ceph.com/issues/55804
1680
    Command failed (workunit test suites/pjd.sh)
1681
* https://tracker.ceph.com/issues/51964
1682
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1683
* https://tracker.ceph.com/issues/57682
1684
    client: ERROR: test_reconnect_after_blocklisted
1685 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1686 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1687
1688
h3. 10 Oct 2022
1689 92 Rishabh Dave
1690 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1691
1692
reruns
1693
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1694 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1695 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1696 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1697 91 Rishabh Dave
1698
known bugs
1699
* https://tracker.ceph.com/issues/52624
1700
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1701
* https://tracker.ceph.com/issues/50223
1702
  client.xxxx isn't responding to mclientcaps(revoke)
1703
* https://tracker.ceph.com/issues/57299
1704
  qa: test_dump_loads fails with JSONDecodeError
1705
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1706
  qa: fs:mixed-clients kernel_untar_build failure
1707
* https://tracker.ceph.com/issues/57206
1708 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1709
1710
h3. 2022 Sep 29
1711
1712
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1713
1714
* https://tracker.ceph.com/issues/55804
1715
  Command failed (workunit test suites/pjd.sh)
1716
* https://tracker.ceph.com/issues/36593
1717
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1718
* https://tracker.ceph.com/issues/52624
1719
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1720
* https://tracker.ceph.com/issues/51964
1721
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1722
* https://tracker.ceph.com/issues/56632
1723
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1724
* https://tracker.ceph.com/issues/50821
1725 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1726
1727
h3. 2022 Sep 26
1728
1729
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1730
1731
* https://tracker.ceph.com/issues/55804
1732
    qa failure: pjd link tests failed
1733
* https://tracker.ceph.com/issues/57676
1734
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1735
* https://tracker.ceph.com/issues/52624
1736
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1737
* https://tracker.ceph.com/issues/57580
1738
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1739
* https://tracker.ceph.com/issues/48773
1740
    qa: scrub does not complete
1741
* https://tracker.ceph.com/issues/57299
1742
    qa: test_dump_loads fails with JSONDecodeError
1743
* https://tracker.ceph.com/issues/57280
1744
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1745
* https://tracker.ceph.com/issues/57205
1746
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1747
* https://tracker.ceph.com/issues/57656
1748
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1749
* https://tracker.ceph.com/issues/57677
1750
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1751
* https://tracker.ceph.com/issues/57206
1752
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1753
* https://tracker.ceph.com/issues/57446
1754
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1755 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1756
    qa: fs:mixed-clients kernel_untar_build failure
1757 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1758
    client: ERROR: test_reconnect_after_blocklisted
1759 87 Patrick Donnelly
1760
1761
h3. 2022 Sep 22
1762
1763
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1764
1765
* https://tracker.ceph.com/issues/57299
1766
    qa: test_dump_loads fails with JSONDecodeError
1767
* https://tracker.ceph.com/issues/57205
1768
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1769
* https://tracker.ceph.com/issues/52624
1770
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1771
* https://tracker.ceph.com/issues/57580
1772
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1773
* https://tracker.ceph.com/issues/57280
1774
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1775
* https://tracker.ceph.com/issues/48773
1776
    qa: scrub does not complete
1777
* https://tracker.ceph.com/issues/56446
1778
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1779
* https://tracker.ceph.com/issues/57206
1780
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1781
* https://tracker.ceph.com/issues/51267
1782
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1783
1784
NEW:
1785
1786
* https://tracker.ceph.com/issues/57656
1787
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1788
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1789
    qa: fs:mixed-clients kernel_untar_build failure
1790
* https://tracker.ceph.com/issues/57657
1791
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1792
1793
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1794 80 Venky Shankar
1795 79 Venky Shankar
1796
h3. 2022 Sep 16
1797
1798
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1799
1800
* https://tracker.ceph.com/issues/57446
1801
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1802
* https://tracker.ceph.com/issues/57299
1803
    qa: test_dump_loads fails with JSONDecodeError
1804
* https://tracker.ceph.com/issues/50223
1805
    client.xxxx isn't responding to mclientcaps(revoke)
1806
* https://tracker.ceph.com/issues/52624
1807
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1808
* https://tracker.ceph.com/issues/57205
1809
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1810
* https://tracker.ceph.com/issues/57280
1811
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1812
* https://tracker.ceph.com/issues/51282
1813
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1814
* https://tracker.ceph.com/issues/48203
1815
  https://tracker.ceph.com/issues/36593
1816
    qa: quota failure
1817
    qa: quota failure caused by clients stepping on each other
1818
* https://tracker.ceph.com/issues/57580
1819 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1820
1821 76 Rishabh Dave
1822
h3. 2022 Aug 26
1823
1824
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1825
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1826
1827
* https://tracker.ceph.com/issues/57206
1828
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1829
* https://tracker.ceph.com/issues/56632
1830
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1831
* https://tracker.ceph.com/issues/56446
1832
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1833
* https://tracker.ceph.com/issues/51964
1834
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1835
* https://tracker.ceph.com/issues/53859
1836
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1837
1838
* https://tracker.ceph.com/issues/54460
1839
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1840
* https://tracker.ceph.com/issues/54462
1841
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1842
1844
* https://tracker.ceph.com/issues/36593
1845
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1846
1847
* https://tracker.ceph.com/issues/52624
1848
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1849
* https://tracker.ceph.com/issues/55804
1850
  Command failed (workunit test suites/pjd.sh)
1851
* https://tracker.ceph.com/issues/50223
1852
  client.xxxx isn't responding to mclientcaps(revoke)
1853 75 Venky Shankar
1854
1855
h3. 2022 Aug 22
1856
1857
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1858
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1859
1860
* https://tracker.ceph.com/issues/52624
1861
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1862
* https://tracker.ceph.com/issues/56446
1863
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1864
* https://tracker.ceph.com/issues/55804
1865
    Command failed (workunit test suites/pjd.sh)
1866
* https://tracker.ceph.com/issues/51278
1867
    mds: "FAILED ceph_assert(!segments.empty())"
1868
* https://tracker.ceph.com/issues/54460
1869
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1870
* https://tracker.ceph.com/issues/57205
1871
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1872
* https://tracker.ceph.com/issues/57206
1873
    ceph_test_libcephfs_reclaim crashes during test
1874
* https://tracker.ceph.com/issues/53859
1875
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1876
* https://tracker.ceph.com/issues/50223
1877 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1878
1879
h3. 2022 Aug 12
1880
1881
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1882
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1883
1884
* https://tracker.ceph.com/issues/52624
1885
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1886
* https://tracker.ceph.com/issues/56446
1887
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1888
* https://tracker.ceph.com/issues/51964
1889
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1890
* https://tracker.ceph.com/issues/55804
1891
    Command failed (workunit test suites/pjd.sh)
1892
* https://tracker.ceph.com/issues/50223
1893
    client.xxxx isn't responding to mclientcaps(revoke)
1894
* https://tracker.ceph.com/issues/50821
1895 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1896 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1897 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1898
1899
h3. 2022 Aug 04
1900
1901
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1902
1903 69 Rishabh Dave
Unrelated teuthology failure on rhel
1904 68 Rishabh Dave
1905
h3. 2022 Jul 25
1906
1907
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1908
1909 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1910
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1911 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1912
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1913
1914
* https://tracker.ceph.com/issues/55804
1915
  Command failed (workunit test suites/pjd.sh)
1916
* https://tracker.ceph.com/issues/50223
1917
  client.xxxx isn't responding to mclientcaps(revoke)
1918
1919
* https://tracker.ceph.com/issues/54460
1920
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1921 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1922 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1923 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1924 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1925
1926
h3. 2022 July 22
1927
1928
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1929
1930
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
1931
transient selinux ping failure
1932
1933
* https://tracker.ceph.com/issues/56694
1934
    qa: avoid blocking forever on hung umount
1935
* https://tracker.ceph.com/issues/56695
1936
    [RHEL stock] pjd test failures
1937
* https://tracker.ceph.com/issues/56696
1938
    admin keyring disappears during qa run
1939
* https://tracker.ceph.com/issues/56697
1940
    qa: fs/snaps fails for fuse
1941
* https://tracker.ceph.com/issues/50222
1942
    osd: 5.2s0 deep-scrub : stat mismatch
1943
* https://tracker.ceph.com/issues/56698
1944
    client: FAILED ceph_assert(_size == 0)
1945
* https://tracker.ceph.com/issues/50223
1946
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1947 66 Rishabh Dave
1948 65 Rishabh Dave
1949
h3. 2022 Jul 15
1950
1951
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1952
1953
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1954
1955
* https://tracker.ceph.com/issues/53859
1956
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1957
* https://tracker.ceph.com/issues/55804
1958
  Command failed (workunit test suites/pjd.sh)
1959
* https://tracker.ceph.com/issues/50223
1960
  client.xxxx isn't responding to mclientcaps(revoke)
1961
* https://tracker.ceph.com/issues/50222
1962
  osd: deep-scrub : stat mismatch
1963
1964
* https://tracker.ceph.com/issues/56632
1965
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1966
* https://tracker.ceph.com/issues/56634
1967
  workunit test fs/snaps/snaptest-intodir.sh
1968
* https://tracker.ceph.com/issues/56644
1969
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1970
1971 61 Rishabh Dave
1972
1973
h3. 2022 July 05
1974 62 Rishabh Dave
1975 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1976
1977
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1978
1979
On 2nd re-run only a few jobs failed -
1980 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1981
1982
1983
* https://tracker.ceph.com/issues/56446
1984
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1985
* https://tracker.ceph.com/issues/55804
1986
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1987
1988
* https://tracker.ceph.com/issues/56445
1989 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1990
* https://tracker.ceph.com/issues/51267
1991
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1992 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1993
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1994 61 Rishabh Dave
1995 58 Venky Shankar
1996
1997
h3. 2022 July 04
1998
1999
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2000
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2001
2002
* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fail with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

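For reference, a minimal sketch of what "dropping" a PR from the integration branch looks like before packages are rebuilt and the suite is rescheduled. This assumes the PRs land as merge commits and uses a hypothetical working branch name; in practice the testing branch is often simply recreated without the PR, with the same end result:

<pre>
# start from the previous integration branch
git checkout -b wip-drop-44679 wip-vshankar-testing-20220226-211550

# locate the merge commit that brought in the offending PR
git log --oneline --merges | grep '#44679'

# undo that merge while keeping everything else on the branch
git revert -m 1 <merge-sha>

# push to ceph-ci so shaman rebuilds packages, then reschedule the fs suite
</pre>
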
* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.b log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") - see the command sketch below

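The Scrub error / `damage ls` entries above (tracker 50250) keep coming back across runs; a short sketch of how that state is usually inspected, assuming an MDS named mds.a as in the quoted log line (the subtree path is the one from the warning):

<pre>
# list what scrub recorded in the damage table
ceph tell mds.a damage ls

# re-scrub the affected subtree with repair enabled and watch progress
ceph tell mds.a scrub start /client.0/tmp recursive,repair,force
ceph tell mds.a scrub status
</pre>

In these runs the warning is usually the known "freshly-calculated rstats don't match existing ones" problem rather than real metadata damage, so this mostly serves to confirm that.
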
h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures were caused by a teuthology bug: https://tracker.ceph.com/issues/52944

A new test caused a failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures were caused by the cephadm upgrade test; fixed in a follow-up qa commit.

test_simple failures were caused by a PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' (see the debugfs sketch after this list)
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

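When a kernel-client job wedges like this (hung umount, or the mds_sessions/metrics debugfs paths referenced above), the first look is usually at the client's debugfs state on the test node; a sketch, assuming debugfs is mounted as it is on these testing kernels:

<pre>
# one directory per mounted client: <fsid>.client<id>
ls /sys/kernel/debug/ceph/

# in-flight MDS requests and session state for the stuck mount
sudo cat /sys/kernel/debug/ceph/*/mdsc
sudo cat /sys/kernel/debug/ceph/*/mds_sessions

# hung-task traces from the kernel log
dmesg | tail -n 100
</pre>
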
h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS-abort class of failures was caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

Also, a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing