Main » History » Version 237

Patrick Donnelly, 03/28/2024 06:30 PM

1 221 Patrick Donnelly
h1. <code>main</code> branch
2 1 Patrick Donnelly
3 236 Patrick Donnelly
h3. 2024-03-28
4
5
https://tracker.ceph.com/issues/65213
6
7 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
8
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
9
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
10
    
11
    
12 236 Patrick Donnelly
13 235 Milind Changire
h3. 2024-03-25
14
15
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
16
* https://tracker.ceph.com/issues/64502
17
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
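  The MaxWhileTries error above comes from a bounded wait: the harness polls for a condition (here, the ceph-fuse mount going away) a fixed number of times before giving up. A minimal Python sketch of that pattern follows; the helper name is hypothetical and the real teuthology helper differs in detail.

<pre><code class="python">
import time


class MaxWhileTries(Exception):
    """Stand-in for teuthology.exceptions.MaxWhileTries."""


def wait_until(check, tries=51, sleep=6):
    """Poll check() until it returns True, sleeping between attempts.

    Hypothetical helper sketching the bounded wait: 51 tries roughly
    6 seconds apart is approximately the 300-second window quoted in
    the error message above.
    """
    for _ in range(tries):
        if check():
            return
        time.sleep(sleep)
    raise MaxWhileTries(
        "reached maximum tries (%d) after waiting for %d seconds"
        % (tries, tries * sleep)
    )
</code></pre>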
18
19
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
20
21
* https://tracker.ceph.com/issues/62245
22
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
23
24
25 228 Patrick Donnelly
h3. 2024-03-20
26
27 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
28 228 Patrick Donnelly
29 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
30
31 229 Patrick Donnelly
Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.
32 1 Patrick Donnelly
33 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
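For context, the log WRN/ERR check scans the cluster log for warning/error lines that are not explicitly ignorelisted and fails the job if any remain. A minimal sketch of that kind of filter is below; the ignorelist entries are only examples, not the suite's actual list.

<pre><code class="python">
import re

# Example patterns only; each suite carries its own log-ignorelist.
IGNORELIST = [r"POOL_APP_NOT_ENABLED", r"PG_DEGRADED"]


def failing_log_lines(path, ignorelist=IGNORELIST):
    """Return cluster-log WRN/ERR lines that are not ignorelisted.

    Sketch of the check: any line left over after filtering marks the
    teuthology job as failed.
    """
    bad = []
    with open(path) as f:
        for line in f:
            if not re.search(r"\[(WRN|ERR)\]", line):
                continue
            if any(re.search(p, line) for p in ignorelist):
                continue
            bad.append(line.rstrip())
    return bad
</code></pre>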
34 228 Patrick Donnelly
35 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
36
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
37
* https://tracker.ceph.com/issues/64572
38
    workunits/fsx.sh failure
39
* https://tracker.ceph.com/issues/65018
40
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
41
* https://tracker.ceph.com/issues/64707 (new issue)
42
    suites/fsstress.sh hangs on one client - test times out
43 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
44
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
45
* https://tracker.ceph.com/issues/59684
46
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
47 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
48
    qa: "ceph tell 4.3a deep-scrub" command not found
49
* https://tracker.ceph.com/issues/54108
50
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
51
* https://tracker.ceph.com/issues/65019
52
    qa/suites/fs/top: "[WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
53
* https://tracker.ceph.com/issues/65020
54
    qa: "Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
55
* https://tracker.ceph.com/issues/65021
56
    qa/suites/fs/nfs: "cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
57
* https://tracker.ceph.com/issues/63699
58
    qa: failed cephfs-shell test_reading_conf
59 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
60
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
61
* https://tracker.ceph.com/issues/50821
62
    qa: untar_snap_rm failure during mds thrashing
63 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
64
    qa: test_max_items_per_obj open procs not fully cleaned up
65 228 Patrick Donnelly
66 226 Venky Shankar
h3. 14th March 2024
67
68
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
69
70 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)
71 226 Venky Shankar
72
* https://tracker.ceph.com/issues/62067
73
    ffsb.sh failure "Resource temporarily unavailable"
74
* https://tracker.ceph.com/issues/57676
75
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
76
* https://tracker.ceph.com/issues/64502
77
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
78
* https://tracker.ceph.com/issues/64572
79
    workunits/fsx.sh failure
80
* https://tracker.ceph.com/issues/63700
81
    qa: test_cd_with_args failure
82
* https://tracker.ceph.com/issues/59684
83
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
84
* https://tracker.ceph.com/issues/61243
85
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
86
87 225 Venky Shankar
h3. 5th March 2024
88
89
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
90
91
* https://tracker.ceph.com/issues/57676
92
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
93
* https://tracker.ceph.com/issues/64502
94
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
95
* https://tracker.ceph.com/issues/63949
96
    leak in mds.c detected by valgrind during CephFS QA run
97
* https://tracker.ceph.com/issues/57656
98
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
99
* https://tracker.ceph.com/issues/63699
100
    qa: failed cephfs-shell test_reading_conf
101
* https://tracker.ceph.com/issues/64572
102
    workunits/fsx.sh failure
103
* https://tracker.ceph.com/issues/64707 (new issue)
104
    suites/fsstress.sh hangs on one client - test times out
105
* https://tracker.ceph.com/issues/59684
106
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
107
* https://tracker.ceph.com/issues/63700
108
    qa: test_cd_with_args failure
109
* https://tracker.ceph.com/issues/64711
110
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
111
* https://tracker.ceph.com/issues/64729 (new issue)
112
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
113
* https://tracker.ceph.com/issues/64730
114
    fs/misc/multiple_rsync.sh workunit times out
115
116 224 Venky Shankar
h3. 26th Feb 2024
117
118
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
119
120
(This run is a bit messy due to
121
122
  a) OCI runtime issues in the testing kernel with centos9
123
  b) failures related to SELinux denials
124
  c) Unrelated MON_DOWN warnings)
125
126
* https://tracker.ceph.com/issues/57676
127
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
128
* https://tracker.ceph.com/issues/63700
129
    qa: test_cd_with_args failure
130
* https://tracker.ceph.com/issues/63949
131
    leak in mds.c detected by valgrind during CephFS QA run
132
* https://tracker.ceph.com/issues/59684
133
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
134
* https://tracker.ceph.com/issues/61243
135
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
136
* https://tracker.ceph.com/issues/63699
137
    qa: failed cephfs-shell test_reading_conf
138
* https://tracker.ceph.com/issues/64172
139
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
140
* https://tracker.ceph.com/issues/57656
141
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
142
* https://tracker.ceph.com/issues/64572
143
    workunits/fsx.sh failure
144
145 222 Patrick Donnelly
h3. 20th Feb 2024
146
147
https://github.com/ceph/ceph/pull/55601
148
https://github.com/ceph/ceph/pull/55659
149
150
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
151
152
* https://tracker.ceph.com/issues/64502
153
    client: quincy ceph-fuse fails to unmount after upgrade to main
154
155 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is issue #64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
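Roughly, the unmount step runs <code>fusermount -u</code> and then polls until the mountpoint disappears from <code>/proc/mounts</code>; in the issue #64502 failure that poll never succeeds, because the client only starts unmounting once the daemons are stopped. A minimal sketch (the helper names are hypothetical, not the qa suite's actual code):

<pre><code class="python">
import subprocess
import time


def fuse_unmounted(mountpoint):
    """True once mountpoint no longer appears in /proc/mounts."""
    with open("/proc/mounts") as f:
        return all(line.split()[1] != mountpoint for line in f)


def unmount_ceph_fuse(mountpoint, tries=51, sleep=6):
    """Detach a ceph-fuse mount, then wait (bounded) for it to go away."""
    subprocess.run(["fusermount", "-u", mountpoint], check=False)
    for _ in range(tries):
        if fuse_unmounted(mountpoint):
            return
        time.sleep(sleep)
    raise RuntimeError(
        "mount %s still present after %d tries" % (mountpoint, tries)
    )
</code></pre>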
156 218 Venky Shankar
157
h3. 19th Feb 2024
158
159 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
160
161 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
162
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
163
* https://tracker.ceph.com/issues/63700
164
    qa: test_cd_with_args failure
165
* https://tracker.ceph.com/issues/63141
166
    qa/cephfs: test_idem_unaffected_root_squash fails
167
* https://tracker.ceph.com/issues/59684
168
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
169
* https://tracker.ceph.com/issues/63949
170
    leak in mds.c detected by valgrind during CephFS QA run
171
* https://tracker.ceph.com/issues/63764
172
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
173
* https://tracker.ceph.com/issues/63699
174
    qa: failed cephfs-shell test_reading_conf
175 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
176
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
177 201 Rishabh Dave
178 217 Venky Shankar
h3. 29 Jan 2024
179
180
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
181
182
* https://tracker.ceph.com/issues/57676
183
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
184
* https://tracker.ceph.com/issues/63949
185
    leak in mds.c detected by valgrind during CephFS QA run
186
* https://tracker.ceph.com/issues/62067
187
    ffsb.sh failure "Resource temporarily unavailable"
188
* https://tracker.ceph.com/issues/64172
189
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
190
* https://tracker.ceph.com/issues/63265
191
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
192
* https://tracker.ceph.com/issues/61243
193
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
194
* https://tracker.ceph.com/issues/59684
195
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
196
* https://tracker.ceph.com/issues/57656
197
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
198
* https://tracker.ceph.com/issues/64209
199
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
200
201 216 Venky Shankar
h3. 17th Jan 2024
202
203
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
204
205
* https://tracker.ceph.com/issues/63764
206
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
207
* https://tracker.ceph.com/issues/57676
208
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
209
* https://tracker.ceph.com/issues/51964
210
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
211
* https://tracker.ceph.com/issues/63949
212
    leak in mds.c detected by valgrind during CephFS QA run
213
* https://tracker.ceph.com/issues/62067
214
    ffsb.sh failure "Resource temporarily unavailable"
215
* https://tracker.ceph.com/issues/61243
216
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
217
* https://tracker.ceph.com/issues/63259
218
    mds: failed to store backtrace and force file system read-only
219
* https://tracker.ceph.com/issues/63265
220
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
221
222
h3. 16 Jan 2024
223 215 Rishabh Dave
224 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
225
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
226
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
227
228
* https://tracker.ceph.com/issues/63764
229
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
230
* https://tracker.ceph.com/issues/63141
231
  qa/cephfs: test_idem_unaffected_root_squash fails
232
* https://tracker.ceph.com/issues/62067
233
  ffsb.sh failure "Resource temporarily unavailable" 
234
* https://tracker.ceph.com/issues/51964
235
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
236
* https://tracker.ceph.com/issues/54462 
237
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
238
* https://tracker.ceph.com/issues/57676
239
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
240
241
* https://tracker.ceph.com/issues/63949
242
  valgrind leak in MDS
243
* https://tracker.ceph.com/issues/64041
244
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
245
* fsstress failure in last run was due to a kernel MM layer failure, unrelated to CephFS
246
* in the last run, job #7507400 failed due to an MGR issue; the FS wasn't degraded, so it's unrelated to CephFS
247
248 213 Venky Shankar
h3. 06 Dec 2023
249
250
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
251
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
252
253
* https://tracker.ceph.com/issues/63764
254
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
255
* https://tracker.ceph.com/issues/63233
256
    mon|client|mds: valgrind reports possible leaks in the MDS
257
* https://tracker.ceph.com/issues/57676
258
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
259
* https://tracker.ceph.com/issues/62580
260
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
261
* https://tracker.ceph.com/issues/62067
262
    ffsb.sh failure "Resource temporarily unavailable"
263
* https://tracker.ceph.com/issues/61243
264
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
265
* https://tracker.ceph.com/issues/62081
266
    tasks/fscrypt-common does not finish, timesout
267
* https://tracker.ceph.com/issues/63265
268
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
269
* https://tracker.ceph.com/issues/63806
270
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
271
272 211 Patrick Donnelly
h3. 30 Nov 2023
273
274
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
275
276
* https://tracker.ceph.com/issues/63699
277 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
278
* https://tracker.ceph.com/issues/63700
279
    qa: test_cd_with_args failure
280 211 Patrick Donnelly
281 210 Venky Shankar
h3. 29 Nov 2023
282
283
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
284
285
* https://tracker.ceph.com/issues/63233
286
    mon|client|mds: valgrind reports possible leaks in the MDS
287
* https://tracker.ceph.com/issues/63141
288
    qa/cephfs: test_idem_unaffected_root_squash fails
289
* https://tracker.ceph.com/issues/57676
290
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
291
* https://tracker.ceph.com/issues/57655
292
    qa: fs:mixed-clients kernel_untar_build failure
293
* https://tracker.ceph.com/issues/62067
294
    ffsb.sh failure "Resource temporarily unavailable"
295
* https://tracker.ceph.com/issues/61243
296
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
297
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
298
* https://tracker.ceph.com/issues/62810
299
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
300
301 206 Venky Shankar
h3. 14 Nov 2023
302 207 Milind Changire
(Milind)
303
304
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
305
306
* https://tracker.ceph.com/issues/53859
307
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
308
* https://tracker.ceph.com/issues/63233
309
  mon|client|mds: valgrind reports possible leaks in the MDS
310
* https://tracker.ceph.com/issues/63521
311
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
312
* https://tracker.ceph.com/issues/57655
313
  qa: fs:mixed-clients kernel_untar_build failure
314
* https://tracker.ceph.com/issues/62580
315
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
316
* https://tracker.ceph.com/issues/57676
317
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
318
* https://tracker.ceph.com/issues/61243
319
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
320
* https://tracker.ceph.com/issues/63141
321
    qa/cephfs: test_idem_unaffected_root_squash fails
322
* https://tracker.ceph.com/issues/51964
323
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
324
* https://tracker.ceph.com/issues/63522
325
    No module named 'tasks.ceph_fuse'
326
    No module named 'tasks.kclient'
327
    No module named 'tasks.cephfs.fuse_mount'
328
    No module named 'tasks.ceph'
329
* https://tracker.ceph.com/issues/63523
330
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
331
332
333
h3. 14 Nov 2023
334 206 Venky Shankar
335
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
336
337
(ignore the fs:upgrade test failure - the PR is excluded from merge)
338
339
* https://tracker.ceph.com/issues/57676
340
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
341
* https://tracker.ceph.com/issues/63233
342
    mon|client|mds: valgrind reports possible leaks in the MDS
343
* https://tracker.ceph.com/issues/63141
344
    qa/cephfs: test_idem_unaffected_root_squash fails
345
* https://tracker.ceph.com/issues/62580
346
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
347
* https://tracker.ceph.com/issues/57655
348
    qa: fs:mixed-clients kernel_untar_build failure
349
* https://tracker.ceph.com/issues/51964
350
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
351
* https://tracker.ceph.com/issues/63519
352
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
353
* https://tracker.ceph.com/issues/57087
354
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
355
* https://tracker.ceph.com/issues/58945
356
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
357
358 204 Rishabh Dave
h3. 7 Nov 2023
359
360 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
361
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
362
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
363 204 Rishabh Dave
364
* https://tracker.ceph.com/issues/53859
365
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
366
* https://tracker.ceph.com/issues/63233
367
  mon|client|mds: valgrind reports possible leaks in the MDS
368
* https://tracker.ceph.com/issues/57655
369
  qa: fs:mixed-clients kernel_untar_build failure
370
* https://tracker.ceph.com/issues/57676
371
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
372
373
* https://tracker.ceph.com/issues/63473
374
  fsstress.sh failed with errno 124
375
376 202 Rishabh Dave
h3. 3 Nov 2023
377 203 Rishabh Dave
378 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
379
380
* https://tracker.ceph.com/issues/63141
381
  qa/cephfs: test_idem_unaffected_root_squash fails
382
* https://tracker.ceph.com/issues/63233
383
  mon|client|mds: valgrind reports possible leaks in the MDS
384
* https://tracker.ceph.com/issues/57656
385
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
386
* https://tracker.ceph.com/issues/57655
387
  qa: fs:mixed-clients kernel_untar_build failure
388
* https://tracker.ceph.com/issues/57676
389
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
390
391
* https://tracker.ceph.com/issues/59531
392
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
393
* https://tracker.ceph.com/issues/52624
394
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
395
396 198 Patrick Donnelly
h3. 24 October 2023
397
398
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
399
400 200 Patrick Donnelly
Two failures:
401
402
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
403
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
404
405
Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete; will research more.
406
407 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
408
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
409
* https://tracker.ceph.com/issues/57676
410 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
411
* https://tracker.ceph.com/issues/63233
412
    mon|client|mds: valgrind reports possible leaks in the MDS
413
* https://tracker.ceph.com/issues/59531
414
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
415
* https://tracker.ceph.com/issues/57655
416
    qa: fs:mixed-clients kernel_untar_build failure
417 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
418
    ffsb.sh failure "Resource temporarily unavailable"
419
* https://tracker.ceph.com/issues/63411
420
    qa: flush journal may cause timeouts of `scrub status`
421
* https://tracker.ceph.com/issues/61243
422
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
423
* https://tracker.ceph.com/issues/63141
424 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
425 148 Rishabh Dave
426 195 Venky Shankar
h3. 18 Oct 2023
427
428
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
429
430
* https://tracker.ceph.com/issues/52624
431
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
432
* https://tracker.ceph.com/issues/57676
433
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
434
* https://tracker.ceph.com/issues/63233
435
    mon|client|mds: valgrind reports possible leaks in the MDS
436
* https://tracker.ceph.com/issues/63141
437
    qa/cephfs: test_idem_unaffected_root_squash fails
438
* https://tracker.ceph.com/issues/59531
439
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
440
* https://tracker.ceph.com/issues/62658
441
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
442
* https://tracker.ceph.com/issues/62580
443
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
444
* https://tracker.ceph.com/issues/62067
445
    ffsb.sh failure "Resource temporarily unavailable"
446
* https://tracker.ceph.com/issues/57655
447
    qa: fs:mixed-clients kernel_untar_build failure
448
* https://tracker.ceph.com/issues/62036
449
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
450
* https://tracker.ceph.com/issues/58945
451
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
452
* https://tracker.ceph.com/issues/62847
453
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
454
455 193 Venky Shankar
h3. 13 Oct 2023
456
457
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
458
459
* https://tracker.ceph.com/issues/52624
460
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
461
* https://tracker.ceph.com/issues/62936
462
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
463
* https://tracker.ceph.com/issues/47292
464
    cephfs-shell: test_df_for_valid_file failure
465
* https://tracker.ceph.com/issues/63141
466
    qa/cephfs: test_idem_unaffected_root_squash fails
467
* https://tracker.ceph.com/issues/62081
468
    tasks/fscrypt-common does not finish, timesout
469 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
470
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
471 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
472
    mon|client|mds: valgrind reports possible leaks in the MDS
473 193 Venky Shankar
474 190 Patrick Donnelly
h3. 16 Oct 2023
475
476
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
477
478 192 Patrick Donnelly
Infrastructure issues:
479
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
480
    Host lost.
481
482 196 Patrick Donnelly
One followup fix:
483
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
484
485 192 Patrick Donnelly
Failures:
486
487
* https://tracker.ceph.com/issues/56694
488
    qa: avoid blocking forever on hung umount
489
* https://tracker.ceph.com/issues/63089
490
    qa: tasks/mirror times out
491
* https://tracker.ceph.com/issues/52624
492
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
493
* https://tracker.ceph.com/issues/59531
494
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
495
* https://tracker.ceph.com/issues/57676
496
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
497
* https://tracker.ceph.com/issues/62658 
498
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
499
* https://tracker.ceph.com/issues/61243
500
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
501
* https://tracker.ceph.com/issues/57656
502
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
503
* https://tracker.ceph.com/issues/63233
504
  mon|client|mds: valgrind reports possible leaks in the MDS
505 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
506
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
507 192 Patrick Donnelly
508 189 Rishabh Dave
h3. 9 Oct 2023
509
510
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
511
512
* https://tracker.ceph.com/issues/54460
513
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
514
* https://tracker.ceph.com/issues/63141
515
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
516
* https://tracker.ceph.com/issues/62937
517
  logrotate doesn't support parallel execution on same set of logfiles
518
* https://tracker.ceph.com/issues/61400
519
  valgrind+ceph-mon issues
520
* https://tracker.ceph.com/issues/57676
521
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
522
* https://tracker.ceph.com/issues/55805
523
  error during scrub thrashing reached max tries in 900 secs
524
525 188 Venky Shankar
h3. 26 Sep 2023
526
527
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
528
529
* https://tracker.ceph.com/issues/52624
530
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
531
* https://tracker.ceph.com/issues/62873
532
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
533
* https://tracker.ceph.com/issues/61400
534
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
535
* https://tracker.ceph.com/issues/57676
536
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
537
* https://tracker.ceph.com/issues/62682
538
    mon: no mdsmap broadcast after "fs set joinable" is set to true
539
* https://tracker.ceph.com/issues/63089
540
    qa: tasks/mirror times out
541
542 185 Rishabh Dave
h3. 22 Sep 2023
543
544
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
545
546
* https://tracker.ceph.com/issues/59348
547
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
548
* https://tracker.ceph.com/issues/59344
549
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
550
* https://tracker.ceph.com/issues/59531
551
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
552
* https://tracker.ceph.com/issues/61574
553
  build failure for mdtest project
554
* https://tracker.ceph.com/issues/62702
555
  fsstress.sh: MDS slow requests for the internal 'rename' requests
556
* https://tracker.ceph.com/issues/57676
557
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
558
559
* https://tracker.ceph.com/issues/62863 
560
  deadlock in ceph-fuse causes teuthology job to hang and fail
561
* https://tracker.ceph.com/issues/62870
562
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
563
* https://tracker.ceph.com/issues/62873
564
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
565
566 186 Venky Shankar
h3. 20 Sep 2023
567
568
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
569
570
* https://tracker.ceph.com/issues/52624
571
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
572
* https://tracker.ceph.com/issues/61400
573
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
574
* https://tracker.ceph.com/issues/61399
575
    libmpich: undefined references to fi_strerror
576
* https://tracker.ceph.com/issues/62081
577
    tasks/fscrypt-common does not finish, timesout
578
* https://tracker.ceph.com/issues/62658 
579
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
580
* https://tracker.ceph.com/issues/62915
581
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
582
* https://tracker.ceph.com/issues/59531
583
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
584
* https://tracker.ceph.com/issues/62873
585
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
586
* https://tracker.ceph.com/issues/62936
587
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
588
* https://tracker.ceph.com/issues/62937
589
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
590
* https://tracker.ceph.com/issues/62510
591
    snaptest-git-ceph.sh failure with fs/thrash
592
* https://tracker.ceph.com/issues/62081
593
    tasks/fscrypt-common does not finish, timesout
594
* https://tracker.ceph.com/issues/62126
595
    test failure: suites/blogbench.sh stops running
596 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
597
    mon: no mdsmap broadcast after "fs set joinable" is set to true
598 186 Venky Shankar
599 184 Milind Changire
h3. 19 Sep 2023
600
601
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
602
603
* https://tracker.ceph.com/issues/58220#note-9
604
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
605
* https://tracker.ceph.com/issues/62702
606
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
607
* https://tracker.ceph.com/issues/57676
608
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
609
* https://tracker.ceph.com/issues/59348
610
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
611
* https://tracker.ceph.com/issues/52624
612
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
613
* https://tracker.ceph.com/issues/51964
614
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
615
* https://tracker.ceph.com/issues/61243
616
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
617
* https://tracker.ceph.com/issues/59344
618
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
619
* https://tracker.ceph.com/issues/62873
620
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
621
* https://tracker.ceph.com/issues/59413
622
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
623
* https://tracker.ceph.com/issues/53859
624
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
625
* https://tracker.ceph.com/issues/62482
626
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
627
628 178 Patrick Donnelly
629 177 Venky Shankar
h3. 13 Sep 2023
630
631
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
632
633
* https://tracker.ceph.com/issues/52624
634
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
635
* https://tracker.ceph.com/issues/57655
636
    qa: fs:mixed-clients kernel_untar_build failure
637
* https://tracker.ceph.com/issues/57676
638
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
639
* https://tracker.ceph.com/issues/61243
640
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
641
* https://tracker.ceph.com/issues/62567
642
    postgres workunit times out - MDS_SLOW_REQUEST in logs
643
* https://tracker.ceph.com/issues/61400
644
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
645
* https://tracker.ceph.com/issues/61399
646
    libmpich: undefined references to fi_strerror
647
* https://tracker.ceph.com/issues/57655
648
    qa: fs:mixed-clients kernel_untar_build failure
649
* https://tracker.ceph.com/issues/57676
650
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
651
* https://tracker.ceph.com/issues/51964
652
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
653
* https://tracker.ceph.com/issues/62081
654
    tasks/fscrypt-common does not finish, timesout
655 178 Patrick Donnelly
656 179 Patrick Donnelly
h3. 2023 Sep 12
657 178 Patrick Donnelly
658
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
659 1 Patrick Donnelly
660 181 Patrick Donnelly
A few failures were caused by the qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:
661
662 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
663 181 Patrick Donnelly
664
Failures:
665
666 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
667
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
668
* https://tracker.ceph.com/issues/57656
669
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
670
* https://tracker.ceph.com/issues/55805
671
  error scrub thrashing reached max tries in 900 secs
672
* https://tracker.ceph.com/issues/62067
673
    ffsb.sh failure "Resource temporarily unavailable"
674
* https://tracker.ceph.com/issues/59344
675
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
676
* https://tracker.ceph.com/issues/61399
677 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
678
* https://tracker.ceph.com/issues/62832
679
  common: config_proxy deadlock during shutdown (and possibly other times)
680
* https://tracker.ceph.com/issues/59413
681 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
682 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
683
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
684
* https://tracker.ceph.com/issues/62567
685
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
686
* https://tracker.ceph.com/issues/54460
687
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
688
* https://tracker.ceph.com/issues/58220#note-9
689
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
690
* https://tracker.ceph.com/issues/59348
691
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
692 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
693
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
694
* https://tracker.ceph.com/issues/62848
695
    qa: fail_fs upgrade scenario hanging
696
* https://tracker.ceph.com/issues/62081
697
    tasks/fscrypt-common does not finish, timesout
698 177 Venky Shankar
699 176 Venky Shankar
h3. 11 Sep 2023
700 175 Venky Shankar
701
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
702
703
* https://tracker.ceph.com/issues/52624
704
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
705
* https://tracker.ceph.com/issues/61399
706
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
707
* https://tracker.ceph.com/issues/57655
708
    qa: fs:mixed-clients kernel_untar_build failure
709
* https://tracker.ceph.com/issues/61399
710
    ior build failure
711
* https://tracker.ceph.com/issues/59531
712
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
713
* https://tracker.ceph.com/issues/59344
714
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
715
* https://tracker.ceph.com/issues/59346
716
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
717
* https://tracker.ceph.com/issues/59348
718
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
719
* https://tracker.ceph.com/issues/57676
720
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
721
* https://tracker.ceph.com/issues/61243
722
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
723
* https://tracker.ceph.com/issues/62567
724
  postgres workunit times out - MDS_SLOW_REQUEST in logs
725
726
727 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
728
729
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
730
731
* https://tracker.ceph.com/issues/51964
732
  test_cephfs_mirror_restart_sync_on_blocklist failure
733
* https://tracker.ceph.com/issues/59348
734
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
735
* https://tracker.ceph.com/issues/53859
736
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
737
* https://tracker.ceph.com/issues/61892
738
  test_strays.TestStrays.test_snapshot_remove failed
739
* https://tracker.ceph.com/issues/54460
740
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
741
* https://tracker.ceph.com/issues/59346
742
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
743
* https://tracker.ceph.com/issues/59344
744
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
745
* https://tracker.ceph.com/issues/62484
746
  qa: ffsb.sh test failure
747
* https://tracker.ceph.com/issues/62567
748
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
749
  
750
* https://tracker.ceph.com/issues/61399
751
  ior build failure
752
* https://tracker.ceph.com/issues/57676
753
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
754
* https://tracker.ceph.com/issues/55805
755
  error scrub thrashing reached max tries in 900 secs
756
757 172 Rishabh Dave
h3. 6 Sep 2023
758 171 Rishabh Dave
759 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
760 171 Rishabh Dave
761 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
762
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
763 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
764
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
765 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
766 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
767
* https://tracker.ceph.com/issues/59348
768
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
769
* https://tracker.ceph.com/issues/54462
770
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
771
* https://tracker.ceph.com/issues/62556
772
  test_acls: xfstests_dev: python2 is missing
773
* https://tracker.ceph.com/issues/62067
774
  ffsb.sh failure "Resource temporarily unavailable"
775
* https://tracker.ceph.com/issues/57656
776
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
777 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
778
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
779 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
780 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
781
782 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
783
  ior build failure
784
* https://tracker.ceph.com/issues/57676
785
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
786
* https://tracker.ceph.com/issues/55805
787
  error scrub thrashing reached max tries in 900 secs
788 173 Rishabh Dave
789
* https://tracker.ceph.com/issues/62567
790
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
791
* https://tracker.ceph.com/issues/62702
792
  workunit test suites/fsstress.sh on smithi066 with status 124
793 170 Rishabh Dave
794
h3. 5 Sep 2023
795
796
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
797
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
798
  this run has failures but according to Adam King these are not relevant and should be ignored
799
800
* https://tracker.ceph.com/issues/61892
801
  test_snapshot_remove (test_strays.TestStrays) failed
802
* https://tracker.ceph.com/issues/59348
803
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
804
* https://tracker.ceph.com/issues/54462
805
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
806
* https://tracker.ceph.com/issues/62067
807
  ffsb.sh failure "Resource temporarily unavailable"
808
* https://tracker.ceph.com/issues/57656 
809
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
810
* https://tracker.ceph.com/issues/59346
811
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
812
* https://tracker.ceph.com/issues/59344
813
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
814
* https://tracker.ceph.com/issues/50223
815
  client.xxxx isn't responding to mclientcaps(revoke)
816
* https://tracker.ceph.com/issues/57655
817
  qa: fs:mixed-clients kernel_untar_build failure
818
* https://tracker.ceph.com/issues/62187
819
  iozone.sh: line 5: iozone: command not found
820
 
821
* https://tracker.ceph.com/issues/61399
822
  ior build failure
823
* https://tracker.ceph.com/issues/57676
824
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
825
* https://tracker.ceph.com/issues/55805
826
  error scrub thrashing reached max tries in 900 secs
827 169 Venky Shankar
828
829
h3. 31 Aug 2023
830
831
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
832
833
* https://tracker.ceph.com/issues/52624
834
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
835
* https://tracker.ceph.com/issues/62187
836
    iozone: command not found
837
* https://tracker.ceph.com/issues/61399
838
    ior build failure
839
* https://tracker.ceph.com/issues/59531
840
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
841
* https://tracker.ceph.com/issues/61399
842
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
843
* https://tracker.ceph.com/issues/57655
844
    qa: fs:mixed-clients kernel_untar_build failure
845
* https://tracker.ceph.com/issues/59344
846
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
847
* https://tracker.ceph.com/issues/59346
848
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
849
* https://tracker.ceph.com/issues/59348
850
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
851
* https://tracker.ceph.com/issues/59413
852
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
853
* https://tracker.ceph.com/issues/62653
854
    qa: unimplemented fcntl command: 1036 with fsstress
855
* https://tracker.ceph.com/issues/61400
856
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
857
* https://tracker.ceph.com/issues/62658
858
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
859
* https://tracker.ceph.com/issues/62188
860
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
861 168 Venky Shankar
862
863
h3. 25 Aug 2023
864
865
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
866
867
* https://tracker.ceph.com/issues/59344
868
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
869
* https://tracker.ceph.com/issues/59346
870
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
871
* https://tracker.ceph.com/issues/59348
872
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
873
* https://tracker.ceph.com/issues/57655
874
    qa: fs:mixed-clients kernel_untar_build failure
875
* https://tracker.ceph.com/issues/61243
876
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
877
* https://tracker.ceph.com/issues/61399
878
    ior build failure
879
* https://tracker.ceph.com/issues/61399
880
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
881
* https://tracker.ceph.com/issues/62484
882
    qa: ffsb.sh test failure
883
* https://tracker.ceph.com/issues/59531
884
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
885
* https://tracker.ceph.com/issues/62510
886
    snaptest-git-ceph.sh failure with fs/thrash
887 167 Venky Shankar
888
889
h3. 24 Aug 2023
890
891
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
892
893
* https://tracker.ceph.com/issues/57676
894
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
895
* https://tracker.ceph.com/issues/51964
896
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
897
* https://tracker.ceph.com/issues/59344
898
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
899
* https://tracker.ceph.com/issues/59346
900
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
901
* https://tracker.ceph.com/issues/59348
902
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
903
* https://tracker.ceph.com/issues/61399
904
    ior build failure
905
* https://tracker.ceph.com/issues/61399
906
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
907
* https://tracker.ceph.com/issues/62510
908
    snaptest-git-ceph.sh failure with fs/thrash
909
* https://tracker.ceph.com/issues/62484
910
    qa: ffsb.sh test failure
911
* https://tracker.ceph.com/issues/57087
912
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
913
* https://tracker.ceph.com/issues/57656
914
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
915
* https://tracker.ceph.com/issues/62187
916
    iozone: command not found
917
* https://tracker.ceph.com/issues/62188
918
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
919
* https://tracker.ceph.com/issues/62567
920
    postgres workunit times out - MDS_SLOW_REQUEST in logs
921 166 Venky Shankar
922
923
h3. 22 Aug 2023
924
925
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
926
927
* https://tracker.ceph.com/issues/57676
928
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
929
* https://tracker.ceph.com/issues/51964
930
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
931
* https://tracker.ceph.com/issues/59344
932
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
933
* https://tracker.ceph.com/issues/59346
934
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
935
* https://tracker.ceph.com/issues/59348
936
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
937
* https://tracker.ceph.com/issues/61399
938
    ior build failure
939
* https://tracker.ceph.com/issues/61399
940
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
941
* https://tracker.ceph.com/issues/57655
942
    qa: fs:mixed-clients kernel_untar_build failure
943
* https://tracker.ceph.com/issues/61243
944
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
945
* https://tracker.ceph.com/issues/62188
946
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
947
* https://tracker.ceph.com/issues/62510
948
    snaptest-git-ceph.sh failure with fs/thrash
949
* https://tracker.ceph.com/issues/62511
950
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
951 165 Venky Shankar
952
953
h3. 14 Aug 2023
954
955
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
956
957
* https://tracker.ceph.com/issues/51964
958
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
959
* https://tracker.ceph.com/issues/61400
960
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
961
* https://tracker.ceph.com/issues/61399
962
    ior build failure
963
* https://tracker.ceph.com/issues/59348
964
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
965
* https://tracker.ceph.com/issues/59531
966
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
967
* https://tracker.ceph.com/issues/59344
968
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
969
* https://tracker.ceph.com/issues/59346
970
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
971
* https://tracker.ceph.com/issues/61399
972
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
973
* https://tracker.ceph.com/issues/59684 [kclient bug]
974
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
975
* https://tracker.ceph.com/issues/61243 (NEW)
976
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
977
* https://tracker.ceph.com/issues/57655
978
    qa: fs:mixed-clients kernel_untar_build failure
979
* https://tracker.ceph.com/issues/57656
980
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
981 163 Venky Shankar
982
983
h3. 28 July 2023
984
985
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
986
987
* https://tracker.ceph.com/issues/51964
988
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
989
* https://tracker.ceph.com/issues/61400
990
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
991
* https://tracker.ceph.com/issues/61399
992
    ior build failure
993
* https://tracker.ceph.com/issues/57676
994
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
995
* https://tracker.ceph.com/issues/59348
996
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
997
* https://tracker.ceph.com/issues/59531
998
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
999
* https://tracker.ceph.com/issues/59344
1000
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1001
* https://tracker.ceph.com/issues/59346
1002
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1003
* https://github.com/ceph/ceph/pull/52556
1004
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1005
* https://tracker.ceph.com/issues/62187
1006
    iozone: command not found
1007
* https://tracker.ceph.com/issues/61399
1008
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1009
* https://tracker.ceph.com/issues/62188
1010 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1011 158 Rishabh Dave
1012
h3. 24 Jul 2023
1013
1014
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1015
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1016
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1017
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1018
One more run to check whether blogbench.sh fails every time:
1019
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1020
The blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing -
1021 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1022
1023
* https://tracker.ceph.com/issues/61892
1024
  test_snapshot_remove (test_strays.TestStrays) failed
1025
* https://tracker.ceph.com/issues/53859
1026
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1027
* https://tracker.ceph.com/issues/61982
1028
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1029
* https://tracker.ceph.com/issues/52438
1030
  qa: ffsb timeout
1031
* https://tracker.ceph.com/issues/54460
1032
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1033
* https://tracker.ceph.com/issues/57655
1034
  qa: fs:mixed-clients kernel_untar_build failure
1035
* https://tracker.ceph.com/issues/48773
1036
  reached max tries: scrub does not complete
1037
* https://tracker.ceph.com/issues/58340
1038
  mds: fsstress.sh hangs with multimds
1039
* https://tracker.ceph.com/issues/61400
1040
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1041
* https://tracker.ceph.com/issues/57206
1042
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1043
  
1044
* https://tracker.ceph.com/issues/57656
1045
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1046
* https://tracker.ceph.com/issues/61399
1047
  ior build failure
1048
* https://tracker.ceph.com/issues/57676
1049
  error during scrub thrashing: backtrace
1050
  
1051
* https://tracker.ceph.com/issues/38452
1052
  'sudo -u postgres -- pgbench -s 500 -i' failed
1053 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1054 157 Venky Shankar
  blogbench.sh failure
1055
1056
h3. 18 July 2023
1057
1058
* https://tracker.ceph.com/issues/52624
1059
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1060
* https://tracker.ceph.com/issues/57676
1061
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1062
* https://tracker.ceph.com/issues/54460
1063
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1064
* https://tracker.ceph.com/issues/57655
1065
    qa: fs:mixed-clients kernel_untar_build failure
1066
* https://tracker.ceph.com/issues/51964
1067
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1068
* https://tracker.ceph.com/issues/59344
1069
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1070
* https://tracker.ceph.com/issues/61182
1071
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1072
* https://tracker.ceph.com/issues/61957
1073
    test_client_limits.TestClientLimits.test_client_release_bug
1074
* https://tracker.ceph.com/issues/59348
1075
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1076
* https://tracker.ceph.com/issues/61892
1077
    test_strays.TestStrays.test_snapshot_remove failed
1078
* https://tracker.ceph.com/issues/59346
1079
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1080
* https://tracker.ceph.com/issues/44565
1081
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1082
* https://tracker.ceph.com/issues/62067
1083
    ffsb.sh failure "Resource temporarily unavailable"
1084 156 Venky Shankar
1085
1086
h3. 17 July 2023
1087
1088
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1089
1090
* https://tracker.ceph.com/issues/61982
1091
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1092
* https://tracker.ceph.com/issues/59344
1093
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1094
* https://tracker.ceph.com/issues/61182
1095
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1096
* https://tracker.ceph.com/issues/61957
1097
    test_client_limits.TestClientLimits.test_client_release_bug
1098
* https://tracker.ceph.com/issues/61400
1099
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1100
* https://tracker.ceph.com/issues/59348
1101
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1102
* https://tracker.ceph.com/issues/61892
1103
    test_strays.TestStrays.test_snapshot_remove failed
1104
* https://tracker.ceph.com/issues/59346
1105
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1106
* https://tracker.ceph.com/issues/62036
1107
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1108
* https://tracker.ceph.com/issues/61737
1109
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1110
* https://tracker.ceph.com/issues/44565
1111
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1112 155 Rishabh Dave
1113 1 Patrick Donnelly
1114 153 Rishabh Dave
h3. 13 July 2023 Run 2
1115 152 Rishabh Dave
1116
1117
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1118
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1119
1120
* https://tracker.ceph.com/issues/61957
1121
  test_client_limits.TestClientLimits.test_client_release_bug
1122
* https://tracker.ceph.com/issues/61982
1123
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1124
* https://tracker.ceph.com/issues/59348
1125
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1126
* https://tracker.ceph.com/issues/59344
1127
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1128
* https://tracker.ceph.com/issues/54460
1129
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1130
* https://tracker.ceph.com/issues/57655
1131
  qa: fs:mixed-clients kernel_untar_build failure
1132
* https://tracker.ceph.com/issues/61400
1133
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1134
* https://tracker.ceph.com/issues/61399
1135
  ior build failure
1136
1137 151 Venky Shankar
h3. 13 July 2023
1138
1139
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1140
1141
* https://tracker.ceph.com/issues/54460
1142
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1143
* https://tracker.ceph.com/issues/61400
1144
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1145
* https://tracker.ceph.com/issues/57655
1146
    qa: fs:mixed-clients kernel_untar_build failure
1147
* https://tracker.ceph.com/issues/61945
1148
    LibCephFS.DelegTimeout failure
1149
* https://tracker.ceph.com/issues/52624
1150
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1151
* https://tracker.ceph.com/issues/57676
1152
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1153
* https://tracker.ceph.com/issues/59348
1154
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1155
* https://tracker.ceph.com/issues/59344
1156
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1157
* https://tracker.ceph.com/issues/51964
1158
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1159
* https://tracker.ceph.com/issues/59346
1160
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1161
* https://tracker.ceph.com/issues/61982
1162
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1163 150 Rishabh Dave
1164
1165
h3. 13 Jul 2023
1166
1167
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1168
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1169
1170
* https://tracker.ceph.com/issues/61957
1171
  test_client_limits.TestClientLimits.test_client_release_bug
1172
* https://tracker.ceph.com/issues/59348
1173
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1174
* https://tracker.ceph.com/issues/59346
1175
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1176
* https://tracker.ceph.com/issues/48773
1177
  scrub does not complete: reached max tries
1178
* https://tracker.ceph.com/issues/59344
1179
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1180
* https://tracker.ceph.com/issues/52438
1181
  qa: ffsb timeout
1182
* https://tracker.ceph.com/issues/57656
1183
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1184
* https://tracker.ceph.com/issues/58742
1185
  xfstests-dev: kcephfs: generic
1186
* https://tracker.ceph.com/issues/61399
1187 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1188 149 Rishabh Dave
1189 148 Rishabh Dave
h3. 12 July 2023
1190
1191
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1192
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1193
1194
* https://tracker.ceph.com/issues/61892
1195
  test_strays.TestStrays.test_snapshot_remove failed
1196
* https://tracker.ceph.com/issues/59348
1197
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1198
* https://tracker.ceph.com/issues/53859
1199
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1200
* https://tracker.ceph.com/issues/59346
1201
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1202
* https://tracker.ceph.com/issues/58742
1203
  xfstests-dev: kcephfs: generic
1204
* https://tracker.ceph.com/issues/59344
1205
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1206
* https://tracker.ceph.com/issues/52438
1207
  qa: ffsb timeout
1208
* https://tracker.ceph.com/issues/57656
1209
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1210
* https://tracker.ceph.com/issues/54460
1211
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1212
* https://tracker.ceph.com/issues/57655
1213
  qa: fs:mixed-clients kernel_untar_build failure
1214
* https://tracker.ceph.com/issues/61182
1215
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1216
* https://tracker.ceph.com/issues/61400
1217
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1218 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1219 146 Patrick Donnelly
  reached max tries: scrub does not complete
1220
1221
h3. 05 July 2023
1222
1223
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1224
1225 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1226 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1227
1228
h3. 27 Jun 2023
1229
1230
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1231 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1232
1233
* https://tracker.ceph.com/issues/59348
1234
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1235
* https://tracker.ceph.com/issues/54460
1236
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1237
* https://tracker.ceph.com/issues/59346
1238
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1239
* https://tracker.ceph.com/issues/59344
1240
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1241
* https://tracker.ceph.com/issues/61399
1242
  libmpich: undefined references to fi_strerror
1243
* https://tracker.ceph.com/issues/50223
1244
  client.xxxx isn't responding to mclientcaps(revoke)
1245 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1246
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1247 142 Venky Shankar
1248
1249
h3. 22 June 2023
1250
1251
* https://tracker.ceph.com/issues/57676
1252
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1253
* https://tracker.ceph.com/issues/54460
1254
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1255
* https://tracker.ceph.com/issues/59344
1256
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1257
* https://tracker.ceph.com/issues/59348
1258
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1259
* https://tracker.ceph.com/issues/61400
1260
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1261
* https://tracker.ceph.com/issues/57655
1262
    qa: fs:mixed-clients kernel_untar_build failure
1263
* https://tracker.ceph.com/issues/61394
1264
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1265
* https://tracker.ceph.com/issues/61762
1266
    qa: wait_for_clean: failed before timeout expired
1267
* https://tracker.ceph.com/issues/61775
1268
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1269
* https://tracker.ceph.com/issues/44565
1270
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1271
* https://tracker.ceph.com/issues/61790
1272
    cephfs client to mds comms remain silent after reconnect
1273
* https://tracker.ceph.com/issues/61791
1274
    snaptest-git-ceph.sh test timed out (job dead)
1275 139 Venky Shankar
1276
1277
h3. 20 June 2023
1278
1279
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1280
1281
* https://tracker.ceph.com/issues/57676
1282
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1283
* https://tracker.ceph.com/issues/54460
1284
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1285 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1286 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1287 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1288 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1289
* https://tracker.ceph.com/issues/59344
1290
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1291
* https://tracker.ceph.com/issues/59348
1292
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1293
* https://tracker.ceph.com/issues/57656
1294
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1295
* https://tracker.ceph.com/issues/61400
1296
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1297
* https://tracker.ceph.com/issues/57655
1298
    qa: fs:mixed-clients kernel_untar_build failure
1299
* https://tracker.ceph.com/issues/44565
1300
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1301
* https://tracker.ceph.com/issues/61737
1302 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1303
1304
h3. 16 June 2023
1305
1306 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1307 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1308 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1309 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1310
1311
1312
* https://tracker.ceph.com/issues/59344
1313
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1314 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1315
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1316 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1317
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1318
* https://tracker.ceph.com/issues/57656
1319
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1320
* https://tracker.ceph.com/issues/54460
1321
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1322 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1323
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1324 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1325
  libmpich: undefined references to fi_strerror
1326
* https://tracker.ceph.com/issues/58945
1327
  xfstests-dev: ceph-fuse: generic 
1328 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1329 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1330
1331
h3. 24 May 2023
1332
1333
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1334
1335
* https://tracker.ceph.com/issues/57676
1336
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1337
* https://tracker.ceph.com/issues/59683
1338
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1339
* https://tracker.ceph.com/issues/61399
1340
    qa: "[Makefile:299: ior] Error 1"
1341
* https://tracker.ceph.com/issues/61265
1342
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1343
* https://tracker.ceph.com/issues/59348
1344
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1345
* https://tracker.ceph.com/issues/59346
1346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1347
* https://tracker.ceph.com/issues/61400
1348
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1349
* https://tracker.ceph.com/issues/54460
1350
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1351
* https://tracker.ceph.com/issues/51964
1352
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1353
* https://tracker.ceph.com/issues/59344
1354
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1355
* https://tracker.ceph.com/issues/61407
1356
    mds: abort on CInode::verify_dirfrags
1357
* https://tracker.ceph.com/issues/48773
1358
    qa: scrub does not complete
1359
* https://tracker.ceph.com/issues/57655
1360
    qa: fs:mixed-clients kernel_untar_build failure
1361
* https://tracker.ceph.com/issues/61409
1362 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1363
1364
h3. 15 May 2023
1365 130 Venky Shankar
1366 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1367
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1368
1369
* https://tracker.ceph.com/issues/52624
1370
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1371
* https://tracker.ceph.com/issues/54460
1372
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1373
* https://tracker.ceph.com/issues/57676
1374
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1375
* https://tracker.ceph.com/issues/59684 [kclient bug]
1376
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1377
* https://tracker.ceph.com/issues/59348
1378
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1379 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1380
    dbench test results in call trace in dmesg [kclient bug]
1381 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1382 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1383 125 Venky Shankar
1384
 
1385 129 Rishabh Dave
h3. 11 May 2023
1386
1387
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1388
1389
* https://tracker.ceph.com/issues/59684 [kclient bug]
1390
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1391
* https://tracker.ceph.com/issues/59348
1392
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1393
* https://tracker.ceph.com/issues/57655
1394
  qa: fs:mixed-clients kernel_untar_build failure
1395
* https://tracker.ceph.com/issues/57676
1396
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1397
* https://tracker.ceph.com/issues/55805
1398
  error during scrub thrashing reached max tries in 900 secs
1399
* https://tracker.ceph.com/issues/54460
1400
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1401
* https://tracker.ceph.com/issues/57656
1402
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1403
* https://tracker.ceph.com/issues/58220
1404
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1405 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1406
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1407 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1408
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1409 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1410
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1411 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1412
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1413
1414 125 Venky Shankar
h3. 11 May 2023
1415 127 Venky Shankar
1416
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1417 126 Venky Shankar
1418 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1419
 was included in the branch; however, the PR has since been updated and needs a retest).
1420
1421
* https://tracker.ceph.com/issues/52624
1422
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1423
* https://tracker.ceph.com/issues/54460
1424
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1425
* https://tracker.ceph.com/issues/57676
1426
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1427
* https://tracker.ceph.com/issues/59683
1428
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1429
* https://tracker.ceph.com/issues/59684 [kclient bug]
1430
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1431
* https://tracker.ceph.com/issues/59348
1432 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1433
1434
h3. 09 May 2023
1435
1436
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1437
1438
* https://tracker.ceph.com/issues/52624
1439
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1440
* https://tracker.ceph.com/issues/58340
1441
    mds: fsstress.sh hangs with multimds
1442
* https://tracker.ceph.com/issues/54460
1443
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1444
* https://tracker.ceph.com/issues/57676
1445
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1446
* https://tracker.ceph.com/issues/51964
1447
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1448
* https://tracker.ceph.com/issues/59350
1449
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1450
* https://tracker.ceph.com/issues/59683
1451
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1452
* https://tracker.ceph.com/issues/59684 [kclient bug]
1453
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1454
* https://tracker.ceph.com/issues/59348
1455 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1456
1457
h3. 10 Apr 2023
1458
1459
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1460
1461
* https://tracker.ceph.com/issues/52624
1462
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1463
* https://tracker.ceph.com/issues/58340
1464
    mds: fsstress.sh hangs with multimds
1465
* https://tracker.ceph.com/issues/54460
1466
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1467
* https://tracker.ceph.com/issues/57676
1468
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1469 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1470 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1471 121 Rishabh Dave
1472 120 Rishabh Dave
h3. 31 Mar 2023
1473 122 Rishabh Dave
1474
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1475 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1476
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1477
1478
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1479
1480
* https://tracker.ceph.com/issues/57676
1481
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1482
* https://tracker.ceph.com/issues/54460
1483
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1484
* https://tracker.ceph.com/issues/58220
1485
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1486
* https://tracker.ceph.com/issues/58220#note-9
1487
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1488
* https://tracker.ceph.com/issues/56695
1489
  Command failed (workunit test suites/pjd.sh)
1490
* https://tracker.ceph.com/issues/58564 
1491
  workunit dbench failed with error code 1
1492
* https://tracker.ceph.com/issues/57206
1493
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1494
* https://tracker.ceph.com/issues/57580
1495
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1496
* https://tracker.ceph.com/issues/58940
1497
  ceph osd hit ceph_abort
1498
* https://tracker.ceph.com/issues/55805
1499 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1500
1501
h3. 30 March 2023
1502
1503
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1504
1505
* https://tracker.ceph.com/issues/58938
1506
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1507
* https://tracker.ceph.com/issues/51964
1508
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1509
* https://tracker.ceph.com/issues/58340
1510 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1511
1512 115 Venky Shankar
h3. 29 March 2023
1513 114 Venky Shankar
1514
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1515
1516
* https://tracker.ceph.com/issues/56695
1517
    [RHEL stock] pjd test failures
1518
* https://tracker.ceph.com/issues/57676
1519
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1520
* https://tracker.ceph.com/issues/57087
1521
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1522 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1523
    mds: fsstress.sh hangs with multimds
1524 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1525
    qa: fs:mixed-clients kernel_untar_build failure
1526 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1527
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1528 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1529 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1530
1531
h3. 13 Mar 2023
1532
1533
* https://tracker.ceph.com/issues/56695
1534
    [RHEL stock] pjd test failures
1535
* https://tracker.ceph.com/issues/57676
1536
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1537
* https://tracker.ceph.com/issues/51964
1538
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1539
* https://tracker.ceph.com/issues/54460
1540
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1541
* https://tracker.ceph.com/issues/57656
1542 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1543
1544
h3. 09 Mar 2023
1545
1546
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1547
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1548
1549
* https://tracker.ceph.com/issues/56695
1550
    [RHEL stock] pjd test failures
1551
* https://tracker.ceph.com/issues/57676
1552
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1553
* https://tracker.ceph.com/issues/51964
1554
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1555
* https://tracker.ceph.com/issues/54460
1556
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1557
* https://tracker.ceph.com/issues/58340
1558
    mds: fsstress.sh hangs with multimds
1559
* https://tracker.ceph.com/issues/57087
1560 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1561
1562
h3. 07 Mar 2023
1563
1564
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1565
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1566
1567
* https://tracker.ceph.com/issues/56695
1568
    [RHEL stock] pjd test failures
1569
* https://tracker.ceph.com/issues/57676
1570
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1571
* https://tracker.ceph.com/issues/51964
1572
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1573
* https://tracker.ceph.com/issues/57656
1574
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1575
* https://tracker.ceph.com/issues/57655
1576
    qa: fs:mixed-clients kernel_untar_build failure
1577
* https://tracker.ceph.com/issues/58220
1578
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1579
* https://tracker.ceph.com/issues/54460
1580
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1581
* https://tracker.ceph.com/issues/58934
1582 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1583
1584
h3. 28 Feb 2023
1585
1586
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1587
1588
* https://tracker.ceph.com/issues/56695
1589
    [RHEL stock] pjd test failures
1590
* https://tracker.ceph.com/issues/57676
1591
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1592 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1593 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1594
1595 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1596
1597
h3. 25 Jan 2023
1598
1599
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1600
1601
* https://tracker.ceph.com/issues/52624
1602
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1603
* https://tracker.ceph.com/issues/56695
1604
    [RHEL stock] pjd test failures
1605
* https://tracker.ceph.com/issues/57676
1606
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1607
* https://tracker.ceph.com/issues/56446
1608
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1609
* https://tracker.ceph.com/issues/57206
1610
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1611
* https://tracker.ceph.com/issues/58220
1612
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1613
* https://tracker.ceph.com/issues/58340
1614
  mds: fsstress.sh hangs with multimds
1615
* https://tracker.ceph.com/issues/56011
1616
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1617
* https://tracker.ceph.com/issues/54460
1618 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1619
1620
h3. 30 JAN 2023
1621
1622
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1623
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1624 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1625
1626 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1627
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1628
* https://tracker.ceph.com/issues/56695
1629
  [RHEL stock] pjd test failures
1630
* https://tracker.ceph.com/issues/57676
1631
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1632
* https://tracker.ceph.com/issues/55332
1633
  Failure in snaptest-git-ceph.sh
1634
* https://tracker.ceph.com/issues/51964
1635
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1636
* https://tracker.ceph.com/issues/56446
1637
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1638
* https://tracker.ceph.com/issues/57655 
1639
  qa: fs:mixed-clients kernel_untar_build failure
1640
* https://tracker.ceph.com/issues/54460
1641
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1642 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1643
  mds: fsstress.sh hangs with multimds
1644 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1645 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1646
1647
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1648 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1649
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1650 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1651 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1652
1653
h3. 15 Dec 2022
1654
1655
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1656
1657
* https://tracker.ceph.com/issues/52624
1658
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1659
* https://tracker.ceph.com/issues/56695
1660
    [RHEL stock] pjd test failures
1661
* https://tracker.ceph.com/issues/58219
1662
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1663
* https://tracker.ceph.com/issues/57655
1664
    qa: fs:mixed-clients kernel_untar_build failure
1665
* https://tracker.ceph.com/issues/57676
1666
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1667
* https://tracker.ceph.com/issues/58340
1668 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1669
1670
h3. 08 Dec 2022
1671 99 Venky Shankar
1672 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1673
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1674
1675
(lots of transient git.ceph.com failures)
1676
1677
* https://tracker.ceph.com/issues/52624
1678
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1679
* https://tracker.ceph.com/issues/56695
1680
    [RHEL stock] pjd test failures
1681
* https://tracker.ceph.com/issues/57655
1682
    qa: fs:mixed-clients kernel_untar_build failure
1683
* https://tracker.ceph.com/issues/58219
1684
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1685
* https://tracker.ceph.com/issues/58220
1686
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1687 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1688
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1689 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1690
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1691
* https://tracker.ceph.com/issues/54460
1692
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1693 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1694 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1695
1696
h3. 14 Oct 2022
1697
1698
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1699
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1700
1701
* https://tracker.ceph.com/issues/52624
1702
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1703
* https://tracker.ceph.com/issues/55804
1704
    Command failed (workunit test suites/pjd.sh)
1705
* https://tracker.ceph.com/issues/51964
1706
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1707
* https://tracker.ceph.com/issues/57682
1708
    client: ERROR: test_reconnect_after_blocklisted
1709 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1710 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1711
1712
h3. 10 Oct 2022
1713 92 Rishabh Dave
1714 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1715
1716
Re-runs:
1717
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1718 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1719 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1720 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1721 91 Rishabh Dave
1722
Known bugs:
1723
* https://tracker.ceph.com/issues/52624
1724
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1725
* https://tracker.ceph.com/issues/50223
1726
  client.xxxx isn't responding to mclientcaps(revoke
1727
* https://tracker.ceph.com/issues/57299
1728
  qa: test_dump_loads fails with JSONDecodeError
1729
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1730
  qa: fs:mixed-clients kernel_untar_build failure
1731
* https://tracker.ceph.com/issues/57206
1732 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1733
1734
h3. 2022 Sep 29
1735
1736
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1737
1738
* https://tracker.ceph.com/issues/55804
1739
  Command failed (workunit test suites/pjd.sh)
1740
* https://tracker.ceph.com/issues/36593
1741
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1742
* https://tracker.ceph.com/issues/52624
1743
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1744
* https://tracker.ceph.com/issues/51964
1745
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1746
* https://tracker.ceph.com/issues/56632
1747
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1748
* https://tracker.ceph.com/issues/50821
1749 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1750
1751
h3. 2022 Sep 26
1752
1753
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1754
1755
* https://tracker.ceph.com/issues/55804
1756
    qa failure: pjd link tests failed
1757
* https://tracker.ceph.com/issues/57676
1758
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1759
* https://tracker.ceph.com/issues/52624
1760
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1761
* https://tracker.ceph.com/issues/57580
1762
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1763
* https://tracker.ceph.com/issues/48773
1764
    qa: scrub does not complete
1765
* https://tracker.ceph.com/issues/57299
1766
    qa: test_dump_loads fails with JSONDecodeError
1767
* https://tracker.ceph.com/issues/57280
1768
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1769
* https://tracker.ceph.com/issues/57205
1770
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1771
* https://tracker.ceph.com/issues/57656
1772
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1773
* https://tracker.ceph.com/issues/57677
1774
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1775
* https://tracker.ceph.com/issues/57206
1776
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1777
* https://tracker.ceph.com/issues/57446
1778
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1779 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1780
    qa: fs:mixed-clients kernel_untar_build failure
1781 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1782
    client: ERROR: test_reconnect_after_blocklisted
1783 87 Patrick Donnelly
1784
1785
h3. 2022 Sep 22
1786
1787
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1788
1789
* https://tracker.ceph.com/issues/57299
1790
    qa: test_dump_loads fails with JSONDecodeError
1791
* https://tracker.ceph.com/issues/57205
1792
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1793
* https://tracker.ceph.com/issues/52624
1794
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1795
* https://tracker.ceph.com/issues/57580
1796
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1797
* https://tracker.ceph.com/issues/57280
1798
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1799
* https://tracker.ceph.com/issues/48773
1800
    qa: scrub does not complete
1801
* https://tracker.ceph.com/issues/56446
1802
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1803
* https://tracker.ceph.com/issues/57206
1804
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1805
* https://tracker.ceph.com/issues/51267
1806
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1807
1808
NEW:
1809
1810
* https://tracker.ceph.com/issues/57656
1811
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1812
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1813
    qa: fs:mixed-clients kernel_untar_build failure
1814
* https://tracker.ceph.com/issues/57657
1815
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1816
1817
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1818 80 Venky Shankar
1819 79 Venky Shankar
1820
h3. 2022 Sep 16
1821
1822
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1823
1824
* https://tracker.ceph.com/issues/57446
1825
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1826
* https://tracker.ceph.com/issues/57299
1827
    qa: test_dump_loads fails with JSONDecodeError
1828
* https://tracker.ceph.com/issues/50223
1829
    client.xxxx isn't responding to mclientcaps(revoke)
1830
* https://tracker.ceph.com/issues/52624
1831
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1832
* https://tracker.ceph.com/issues/57205
1833
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1834
* https://tracker.ceph.com/issues/57280
1835
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1836
* https://tracker.ceph.com/issues/51282
1837
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1838
* https://tracker.ceph.com/issues/48203
1839
  https://tracker.ceph.com/issues/36593
1840
    qa: quota failure
1841
    qa: quota failure caused by clients stepping on each other
1842
* https://tracker.ceph.com/issues/57580
1843 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1844
1845 76 Rishabh Dave
1846
h3. 2022 Aug 26
1847
1848
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1849
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1850
1851
* https://tracker.ceph.com/issues/57206
1852
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1853
* https://tracker.ceph.com/issues/56632
1854
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1855
* https://tracker.ceph.com/issues/56446
1856
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1857
* https://tracker.ceph.com/issues/51964
1858
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1859
* https://tracker.ceph.com/issues/53859
1860
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1861
1862
* https://tracker.ceph.com/issues/54460
1863
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1864
* https://tracker.ceph.com/issues/54462
1865
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1868
* https://tracker.ceph.com/issues/36593
1869
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1870
1871
* https://tracker.ceph.com/issues/52624
1872
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1873
* https://tracker.ceph.com/issues/55804
1874
  Command failed (workunit test suites/pjd.sh)
1875
* https://tracker.ceph.com/issues/50223
1876
  client.xxxx isn't responding to mclientcaps(revoke)
1877 75 Venky Shankar
1878
1879
h3. 2022 Aug 22
1880
1881
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1882
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1883
1884
* https://tracker.ceph.com/issues/52624
1885
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1886
* https://tracker.ceph.com/issues/56446
1887
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1888
* https://tracker.ceph.com/issues/55804
1889
    Command failed (workunit test suites/pjd.sh)
1890
* https://tracker.ceph.com/issues/51278
1891
    mds: "FAILED ceph_assert(!segments.empty())"
1892
* https://tracker.ceph.com/issues/54460
1893
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1894
* https://tracker.ceph.com/issues/57205
1895
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1896
* https://tracker.ceph.com/issues/57206
1897
    ceph_test_libcephfs_reclaim crashes during test
1898
* https://tracker.ceph.com/issues/53859
1899
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1900
* https://tracker.ceph.com/issues/50223
1901 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1902
1903
h3. 2022 Aug 12
1904
1905
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1906
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1907
1908
* https://tracker.ceph.com/issues/52624
1909
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1910
* https://tracker.ceph.com/issues/56446
1911
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1912
* https://tracker.ceph.com/issues/51964
1913
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1914
* https://tracker.ceph.com/issues/55804
1915
    Command failed (workunit test suites/pjd.sh)
1916
* https://tracker.ceph.com/issues/50223
1917
    client.xxxx isn't responding to mclientcaps(revoke)
1918
* https://tracker.ceph.com/issues/50821
1919 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1920 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1921 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1922
1923
h3. 2022 Aug 04
1924
1925
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1926
1927 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1928 68 Rishabh Dave
1929
h3. 2022 Jul 25
1930
1931
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1932
1933 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1934
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1935 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1936
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1937
1938
* https://tracker.ceph.com/issues/55804
1939
  Command failed (workunit test suites/pjd.sh)
1940
* https://tracker.ceph.com/issues/50223
1941
  client.xxxx isn't responding to mclientcaps(revoke)
1942
1943
* https://tracker.ceph.com/issues/54460
1944
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1945 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1946 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1947 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1948 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1949
1950
h3. 2022 July 22
1951
1952
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1953
1954
MDS_HEALTH_DUMMY error in the log was fixed by a follow-up commit.
1955
Transient SELinux ping failure.
1956
1957
* https://tracker.ceph.com/issues/56694
1958
    qa: avoid blocking forever on hung umount
1959
* https://tracker.ceph.com/issues/56695
1960
    [RHEL stock] pjd test failures
1961
* https://tracker.ceph.com/issues/56696
1962
    admin keyring disappears during qa run
1963
* https://tracker.ceph.com/issues/56697
1964
    qa: fs/snaps fails for fuse
1965
* https://tracker.ceph.com/issues/50222
1966
    osd: 5.2s0 deep-scrub : stat mismatch
1967
* https://tracker.ceph.com/issues/56698
1968
    client: FAILED ceph_assert(_size == 0)
1969
* https://tracker.ceph.com/issues/50223
1970
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1971 66 Rishabh Dave
1972 65 Rishabh Dave
1973
h3. 2022 Jul 15
1974
1975
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1976
1977
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1978
1979
* https://tracker.ceph.com/issues/53859
1980
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1981
* https://tracker.ceph.com/issues/55804
1982
  Command failed (workunit test suites/pjd.sh)
1983
* https://tracker.ceph.com/issues/50223
1984
  client.xxxx isn't responding to mclientcaps(revoke)
1985
* https://tracker.ceph.com/issues/50222
1986
  osd: deep-scrub : stat mismatch
1987
1988
* https://tracker.ceph.com/issues/56632
1989
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1990
* https://tracker.ceph.com/issues/56634
1991
  workunit test fs/snaps/snaptest-intodir.sh
1992
* https://tracker.ceph.com/issues/56644
1993
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1994
1995 61 Rishabh Dave
1996
1997
h3. 2022 July 05
1998 62 Rishabh Dave
1999 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2000
2001
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2002
2003
On the 2nd re-run, only a few jobs failed -
2004 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2006
2007
* https://tracker.ceph.com/issues/56446
2008
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2009
* https://tracker.ceph.com/issues/55804
2010
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2011
2012
* https://tracker.ceph.com/issues/56445
2013 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2014
* https://tracker.ceph.com/issues/51267
2015
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2016 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2017
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2018 61 Rishabh Dave
2019 58 Venky Shankar
2020
2021
h3. 2022 July 04
2022
2023
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2024
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2025
2026
* https://tracker.ceph.com/issues/56445
2027 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2028
* https://tracker.ceph.com/issues/56446
2029
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2030
* https://tracker.ceph.com/issues/51964
2031 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2032 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2033 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2034
2035
h3. 2022 June 20
2036
2037
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2038
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2039
2040
* https://tracker.ceph.com/issues/52624
2041
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2042
* https://tracker.ceph.com/issues/55804
2043
    qa failure: pjd link tests failed
2044
* https://tracker.ceph.com/issues/54108
2045
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2046
* https://tracker.ceph.com/issues/55332
2047 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2048
2049
h3. 2022 June 13
2050
2051
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2052
2053
* https://tracker.ceph.com/issues/56024
2054
    cephadm: removes ceph.conf during qa run causing command failure
2055
* https://tracker.ceph.com/issues/48773
2056
    qa: scrub does not complete
2057
* https://tracker.ceph.com/issues/56012
2058
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2059 55 Venky Shankar
2060 54 Venky Shankar
2061
h3. 2022 Jun 13
2062
2063
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2064
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2065
2066
* https://tracker.ceph.com/issues/52624
2067
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2068
* https://tracker.ceph.com/issues/51964
2069
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2070
* https://tracker.ceph.com/issues/53859
2071
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2072
* https://tracker.ceph.com/issues/55804
2073
    qa failure: pjd link tests failed
2074
* https://tracker.ceph.com/issues/56003
2075
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2076
* https://tracker.ceph.com/issues/56011
2077
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2078
* https://tracker.ceph.com/issues/56012
2079 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2080
2081
h3. 2022 Jun 07
2082
2083
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2084
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2085
2086
* https://tracker.ceph.com/issues/52624
2087
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2088
* https://tracker.ceph.com/issues/50223
2089
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2090
* https://tracker.ceph.com/issues/50224
2091 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2092
2093
h3. 2022 May 12
2094 52 Venky Shankar
2095 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2096
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2097
2098
* https://tracker.ceph.com/issues/52624
2099
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2100
* https://tracker.ceph.com/issues/50223
2101
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2102
* https://tracker.ceph.com/issues/55332
2103
    Failure in snaptest-git-ceph.sh
2104
* https://tracker.ceph.com/issues/53859
2105 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2106 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2107
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2108 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2109 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2110
2111 50 Venky Shankar
h3. 2022 May 04
2112
2113
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2114 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2115
2116
* https://tracker.ceph.com/issues/52624
2117
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2118
* https://tracker.ceph.com/issues/50223
2119
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2120
* https://tracker.ceph.com/issues/55332
2121
    Failure in snaptest-git-ceph.sh
2122
* https://tracker.ceph.com/issues/53859
2123
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2124
* https://tracker.ceph.com/issues/55516
2125
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2126
* https://tracker.ceph.com/issues/55537
2127
    mds: crash during fs:upgrade test
2128
* https://tracker.ceph.com/issues/55538
2129 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2130
2131
h3. 2022 Apr 25
2132
2133
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2134
2135
* https://tracker.ceph.com/issues/52624
2136
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2137
* https://tracker.ceph.com/issues/50223
2138
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2139
* https://tracker.ceph.com/issues/55258
2140
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2141
* https://tracker.ceph.com/issues/55377
2142 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2143
2144
h3. 2022 Apr 14
2145
2146
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2147
2148
* https://tracker.ceph.com/issues/52624
2149
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2150
* https://tracker.ceph.com/issues/50223
2151
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2152
* https://tracker.ceph.com/issues/52438
2153
    qa: ffsb timeout
2154
* https://tracker.ceph.com/issues/55170
2155
    mds: crash during rejoin (CDir::fetch_keys)
2156
* https://tracker.ceph.com/issues/55331
2157
    pjd failure
2158
* https://tracker.ceph.com/issues/48773
2159
    qa: scrub does not complete
2160
* https://tracker.ceph.com/issues/55332
2161
    Failure in snaptest-git-ceph.sh
2162
* https://tracker.ceph.com/issues/55258
2163 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2164
2165 46 Venky Shankar
h3. 2022 Apr 11
2166 45 Venky Shankar
2167
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2168
2169
* https://tracker.ceph.com/issues/48773
2170
    qa: scrub does not complete
2171
* https://tracker.ceph.com/issues/52624
2172
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2173
* https://tracker.ceph.com/issues/52438
2174
    qa: ffsb timeout
2175
* https://tracker.ceph.com/issues/48680
2176
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2177
* https://tracker.ceph.com/issues/55236
2178
    qa: fs/snaps tests fails with "hit max job timeout"
2179
* https://tracker.ceph.com/issues/54108
2180
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2181
* https://tracker.ceph.com/issues/54971
2182
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2183
* https://tracker.ceph.com/issues/50223
2184
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2185
* https://tracker.ceph.com/issues/55258
2186 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2187 42 Venky Shankar
2188 43 Venky Shankar
h3. 2022 Mar 21
2189
2190
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2191
2192
Run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
2193
2194
2195 42 Venky Shankar
h3. 2022 Mar 08
2196
2197
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2198
2199
rerun with
2200
- (drop) https://github.com/ceph/ceph/pull/44679
2201
- (drop) https://github.com/ceph/ceph/pull/44958
2202
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2203
2204
* https://tracker.ceph.com/issues/54419 (new)
2205
    `ceph orch upgrade start` seems to never reach completion
2206
* https://tracker.ceph.com/issues/51964
2207
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2208
* https://tracker.ceph.com/issues/52624
2209
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2210
* https://tracker.ceph.com/issues/50223
2211
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2212
* https://tracker.ceph.com/issues/52438
2213
    qa: ffsb timeout
2214
* https://tracker.ceph.com/issues/50821
2215
    qa: untar_snap_rm failure during mds thrashing
2216 41 Venky Shankar
2217
2218
h3. 2022 Feb 09
2219
2220
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2221
2222
rerun with
2223
- (drop) https://github.com/ceph/ceph/pull/37938
2224
- (drop) https://github.com/ceph/ceph/pull/44335
2225
- (drop) https://github.com/ceph/ceph/pull/44491
2226
- (drop) https://github.com/ceph/ceph/pull/44501
2227
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2228
2229
* https://tracker.ceph.com/issues/51964
2230
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2231
* https://tracker.ceph.com/issues/54066
2232
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2233
* https://tracker.ceph.com/issues/48773
2234
    qa: scrub does not complete
2235
* https://tracker.ceph.com/issues/52624
2236
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2237
* https://tracker.ceph.com/issues/50223
2238
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2239
* https://tracker.ceph.com/issues/52438
2240 40 Patrick Donnelly
    qa: ffsb timeout
2241
2242
h3. 2022 Feb 01
2243
2244
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2245
2246
* https://tracker.ceph.com/issues/54107
2247
    kclient: hang during umount
2248
* https://tracker.ceph.com/issues/54106
2249
    kclient: hang during workunit cleanup
2250
* https://tracker.ceph.com/issues/54108
2251
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2252
* https://tracker.ceph.com/issues/48773
2253
    qa: scrub does not complete
2254
* https://tracker.ceph.com/issues/52438
2255
    qa: ffsb timeout
2256 36 Venky Shankar
2257
2258
h3. 2022 Jan 13
2259 39 Venky Shankar
2260 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2261 38 Venky Shankar
2262
rerun with:
2263 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2264
- (drop) https://github.com/ceph/ceph/pull/43184
2265
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2266
2267
* https://tracker.ceph.com/issues/50223
2268
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2269
* https://tracker.ceph.com/issues/51282
2270
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2271
* https://tracker.ceph.com/issues/48773
2272
    qa: scrub does not complete
2273
* https://tracker.ceph.com/issues/52624
2274
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2275
* https://tracker.ceph.com/issues/53859
2276 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2277
2278
h3. 2022 Jan 03
2279
2280
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2281
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2282
2283
* https://tracker.ceph.com/issues/50223
2284
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2285
* https://tracker.ceph.com/issues/51964
2286
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2287
* https://tracker.ceph.com/issues/51267
2288
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2289
* https://tracker.ceph.com/issues/51282
2290
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2291
* https://tracker.ceph.com/issues/50821
2292
    qa: untar_snap_rm failure during mds thrashing
2293 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2294
    mds: "FAILED ceph_assert(!segments.empty())"
2295
* https://tracker.ceph.com/issues/52279
2296 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2297 33 Patrick Donnelly
2298
2299
h3. 2021 Dec 22
2300
2301
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2302
2303
* https://tracker.ceph.com/issues/52624
2304
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2305
* https://tracker.ceph.com/issues/50223
2306
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2307
* https://tracker.ceph.com/issues/52279
2308
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2309
* https://tracker.ceph.com/issues/50224
2310
    qa: test_mirroring_init_failure_with_recovery failure
2311
* https://tracker.ceph.com/issues/48773
2312
    qa: scrub does not complete
2313 32 Venky Shankar
2314
2315
h3. 2021 Nov 30
2316
2317
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2318
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2319
2320
* https://tracker.ceph.com/issues/53436
2321
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2322
* https://tracker.ceph.com/issues/51964
2323
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2324
* https://tracker.ceph.com/issues/48812
2325
    qa: test_scrub_pause_and_resume_with_abort failure
2326
* https://tracker.ceph.com/issues/51076
2327
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2328
* https://tracker.ceph.com/issues/50223
2329
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2330
* https://tracker.ceph.com/issues/52624
2331
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2332
* https://tracker.ceph.com/issues/50250
2333
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2334 31 Patrick Donnelly
2335
2336
h3. 2021 November 9
2337
2338
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2339
2340
* https://tracker.ceph.com/issues/53214
2341
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2342
* https://tracker.ceph.com/issues/48773
2343
    qa: scrub does not complete
2344
* https://tracker.ceph.com/issues/50223
2345
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2346
* https://tracker.ceph.com/issues/51282
2347
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2348
* https://tracker.ceph.com/issues/52624
2349
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2350
* https://tracker.ceph.com/issues/53216
2351
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2352
* https://tracker.ceph.com/issues/50250
2353
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2354
2355 30 Patrick Donnelly
2356
2357
h3. 2021 November 03
2358
2359
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2360
2361
* https://tracker.ceph.com/issues/51964
2362
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2363
* https://tracker.ceph.com/issues/51282
2364
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2365
* https://tracker.ceph.com/issues/52436
2366
    fs/ceph: "corrupt mdsmap"
2367
* https://tracker.ceph.com/issues/53074
2368
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2369
* https://tracker.ceph.com/issues/53150
2370
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2371
* https://tracker.ceph.com/issues/53155
2372
    MDSMonitor: assertion during upgrade to v16.2.5+
2373 29 Patrick Donnelly
2374
2375
h3. 2021 October 26
2376
2377
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2378
2379
* https://tracker.ceph.com/issues/53074
2380
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2381
* https://tracker.ceph.com/issues/52997
2382
    testing: hanging umount
2383
* https://tracker.ceph.com/issues/50824
2384
    qa: snaptest-git-ceph bus error
2385
* https://tracker.ceph.com/issues/52436
2386
    fs/ceph: "corrupt mdsmap"
2387
* https://tracker.ceph.com/issues/48773
2388
    qa: scrub does not complete
2389
* https://tracker.ceph.com/issues/53082
2390
    ceph-fuse: segmentation fault in Client::handle_mds_map
2391
* https://tracker.ceph.com/issues/50223
2392
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2393
* https://tracker.ceph.com/issues/52624
2394
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2395
* https://tracker.ceph.com/issues/50224
2396
    qa: test_mirroring_init_failure_with_recovery failure
2397
* https://tracker.ceph.com/issues/50821
2398
    qa: untar_snap_rm failure during mds thrashing
2399
* https://tracker.ceph.com/issues/50250
2400
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2401
2402 27 Patrick Donnelly
2403
2404 28 Patrick Donnelly
h3. 2021 October 19
2405 27 Patrick Donnelly
2406
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2407
2408
* https://tracker.ceph.com/issues/52995
2409
    qa: test_standby_count_wanted failure
2410
* https://tracker.ceph.com/issues/52948
2411
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2412
* https://tracker.ceph.com/issues/52996
2413
    qa: test_perf_counters via test_openfiletable
2414
* https://tracker.ceph.com/issues/48772
2415
    qa: pjd: not ok 9, 44, 80
2416
* https://tracker.ceph.com/issues/52997
2417
    testing: hanging umount
2418
* https://tracker.ceph.com/issues/50250
2419
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2420
* https://tracker.ceph.com/issues/52624
2421
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2422
* https://tracker.ceph.com/issues/50223
2423
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2424
* https://tracker.ceph.com/issues/50821
2425
    qa: untar_snap_rm failure during mds thrashing
2426
* https://tracker.ceph.com/issues/48773
2427
    qa: scrub does not complete
2428 26 Patrick Donnelly
2429
2430
h3. 2021 October 12
2431
2432
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2433
2434
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2435
2436
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2437
2438
2439
* https://tracker.ceph.com/issues/51282
2440
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2441
* https://tracker.ceph.com/issues/52948
2442
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2443
* https://tracker.ceph.com/issues/48773
2444
    qa: scrub does not complete
2445
* https://tracker.ceph.com/issues/50224
2446
    qa: test_mirroring_init_failure_with_recovery failure
2447
* https://tracker.ceph.com/issues/52949
2448
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2449 25 Patrick Donnelly
2450 23 Patrick Donnelly
2451 24 Patrick Donnelly
h3. 2021 October 02
2452
2453
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2454
2455
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2456
2457
test_simple failures caused by PR in this set.
2458
2459
A few reruns because of QA infra noise.
2460
2461
* https://tracker.ceph.com/issues/52822
2462
    qa: failed pacific install on fs:upgrade
2463
* https://tracker.ceph.com/issues/52624
2464
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2465
* https://tracker.ceph.com/issues/50223
2466
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2467
* https://tracker.ceph.com/issues/48773
2468
    qa: scrub does not complete
2469
2470
2471 23 Patrick Donnelly
h3. 2021 September 20
2472
2473
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2474
2475
* https://tracker.ceph.com/issues/52677
2476
    qa: test_simple failure
2477
* https://tracker.ceph.com/issues/51279
2478
    kclient hangs on umount (testing branch)
2479
* https://tracker.ceph.com/issues/50223
2480
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2481
* https://tracker.ceph.com/issues/50250
2482
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2483
* https://tracker.ceph.com/issues/52624
2484
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2485
* https://tracker.ceph.com/issues/52438
2486
    qa: ffsb timeout
2487 22 Patrick Donnelly
2488
2489
h3. 2021 September 10
2490
2491
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2492
2493
* https://tracker.ceph.com/issues/50223
2494
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2495
* https://tracker.ceph.com/issues/50250
2496
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2497
* https://tracker.ceph.com/issues/52624
2498
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2499
* https://tracker.ceph.com/issues/52625
2500
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2501
* https://tracker.ceph.com/issues/52439
2502
    qa: acls does not compile on centos stream
2503
* https://tracker.ceph.com/issues/50821
2504
    qa: untar_snap_rm failure during mds thrashing
2505
* https://tracker.ceph.com/issues/48773
2506
    qa: scrub does not complete
2507
* https://tracker.ceph.com/issues/52626
2508
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2509
* https://tracker.ceph.com/issues/51279
2510
    kclient hangs on umount (testing branch)
2511 21 Patrick Donnelly
2512
2513
h3. 2021 August 27
2514
2515
Several jobs died because of device failures.
2516
2517
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2518
2519
* https://tracker.ceph.com/issues/52430
2520
    mds: fast async create client mount breaks racy test
2521
* https://tracker.ceph.com/issues/52436
2522
    fs/ceph: "corrupt mdsmap"
2523
* https://tracker.ceph.com/issues/52437
2524
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2525
* https://tracker.ceph.com/issues/51282
2526
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2527
* https://tracker.ceph.com/issues/52438
2528
    qa: ffsb timeout
2529
* https://tracker.ceph.com/issues/52439
2530
    qa: acls does not compile on centos stream
2531 20 Patrick Donnelly
2532
2533
h3. 2021 July 30
2534
2535
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2536
2537
* https://tracker.ceph.com/issues/50250
2538
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2539
* https://tracker.ceph.com/issues/51282
2540
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2541
* https://tracker.ceph.com/issues/48773
2542
    qa: scrub does not complete
2543
* https://tracker.ceph.com/issues/51975
2544
    pybind/mgr/stats: KeyError
2545 19 Patrick Donnelly
2546
2547
h3. 2021 July 28
2548
2549
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2550
2551
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2552
2553
* https://tracker.ceph.com/issues/51905
2554
    qa: "error reading sessionmap 'mds1_sessionmap'"
2555
* https://tracker.ceph.com/issues/48773
2556
    qa: scrub does not complete
2557
* https://tracker.ceph.com/issues/50250
2558
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2559
* https://tracker.ceph.com/issues/51267
2560
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2561
* https://tracker.ceph.com/issues/51279
2562
    kclient hangs on umount (testing branch)
2563 18 Patrick Donnelly
2564
2565
h3. 2021 July 16
2566
2567
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2568
2569
* https://tracker.ceph.com/issues/48773
2570
    qa: scrub does not complete
2571
* https://tracker.ceph.com/issues/48772
2572
    qa: pjd: not ok 9, 44, 80
2573
* https://tracker.ceph.com/issues/45434
2574
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2575
* https://tracker.ceph.com/issues/51279
2576
    kclient hangs on umount (testing branch)
2577
* https://tracker.ceph.com/issues/50824
2578
    qa: snaptest-git-ceph bus error
2579 17 Patrick Donnelly
2580
2581
h3. 2021 July 04
2582
2583
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2584
2585
* https://tracker.ceph.com/issues/48773
2586
    qa: scrub does not complete
2587
* https://tracker.ceph.com/issues/39150
2588
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2589
* https://tracker.ceph.com/issues/45434
2590
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2591
* https://tracker.ceph.com/issues/51282
2592
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2593
* https://tracker.ceph.com/issues/48771
2594
    qa: iogen: workload fails to cause balancing
2595
* https://tracker.ceph.com/issues/51279
2596
    kclient hangs on umount (testing branch)
2597
* https://tracker.ceph.com/issues/50250
2598
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2599 16 Patrick Donnelly
2600
2601
h3. 2021 July 01
2602
2603
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2604
2605
* https://tracker.ceph.com/issues/51197
2606
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2607
* https://tracker.ceph.com/issues/50866
2608
    osd: stat mismatch on objects
2609
* https://tracker.ceph.com/issues/48773
2610
    qa: scrub does not complete
2611 15 Patrick Donnelly
2612
2613
h3. 2021 June 26
2614
2615
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2616
2617
* https://tracker.ceph.com/issues/51183
2618
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2619
* https://tracker.ceph.com/issues/51410
2620
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2621
* https://tracker.ceph.com/issues/48773
2622
    qa: scrub does not complete
2623
* https://tracker.ceph.com/issues/51282
2624
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2625
* https://tracker.ceph.com/issues/51169
2626
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2627
* https://tracker.ceph.com/issues/48772
2628
    qa: pjd: not ok 9, 44, 80
2629 14 Patrick Donnelly
2630
2631
h3. 2021 June 21
2632
2633
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2634
2635
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2636
2637
* https://tracker.ceph.com/issues/51282
2638
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2639
* https://tracker.ceph.com/issues/51183
2640
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2641
* https://tracker.ceph.com/issues/48773
2642
    qa: scrub does not complete
2643
* https://tracker.ceph.com/issues/48771
2644
    qa: iogen: workload fails to cause balancing
2645
* https://tracker.ceph.com/issues/51169
2646
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2647
* https://tracker.ceph.com/issues/50495
2648
    libcephfs: shutdown race fails with status 141
2649
* https://tracker.ceph.com/issues/45434
2650
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2651
* https://tracker.ceph.com/issues/50824
2652
    qa: snaptest-git-ceph bus error
2653
* https://tracker.ceph.com/issues/50223
2654
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2655 13 Patrick Donnelly
2656
2657
h3. 2021 June 16
2658
2659
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2660
2661
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2662
2663
* https://tracker.ceph.com/issues/45434
2664
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2665
* https://tracker.ceph.com/issues/51169
2666
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2667
* https://tracker.ceph.com/issues/43216
2668
    MDSMonitor: removes MDS coming out of quorum election
2669
* https://tracker.ceph.com/issues/51278
2670
    mds: "FAILED ceph_assert(!segments.empty())"
2671
* https://tracker.ceph.com/issues/51279
2672
    kclient hangs on umount (testing branch)
2673
* https://tracker.ceph.com/issues/51280
2674
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2675
* https://tracker.ceph.com/issues/51183
2676
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2677
* https://tracker.ceph.com/issues/51281
2678
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2679
* https://tracker.ceph.com/issues/48773
2680
    qa: scrub does not complete
2681
* https://tracker.ceph.com/issues/51076
2682
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2683
* https://tracker.ceph.com/issues/51228
2684
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2685
* https://tracker.ceph.com/issues/51282
2686
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2687 12 Patrick Donnelly
2688
2689
h3. 2021 June 14
2690
2691
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2692
2693
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2694
2695
* https://tracker.ceph.com/issues/51169
2696
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2697
* https://tracker.ceph.com/issues/51228
2698
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2699
* https://tracker.ceph.com/issues/48773
2700
    qa: scrub does not complete
2701
* https://tracker.ceph.com/issues/51183
2702
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2703
* https://tracker.ceph.com/issues/45434
2704
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2705
* https://tracker.ceph.com/issues/51182
2706
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2707
* https://tracker.ceph.com/issues/51229
2708
    qa: test_multi_snap_schedule list difference failure
2709
* https://tracker.ceph.com/issues/50821
2710
    qa: untar_snap_rm failure during mds thrashing
2711 11 Patrick Donnelly
2712
2713
h3. 2021 June 13
2714
2715
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2716
2717
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2718
2719
* https://tracker.ceph.com/issues/51169
2720
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2721
* https://tracker.ceph.com/issues/48773
2722
    qa: scrub does not complete
2723
* https://tracker.ceph.com/issues/51182
2724
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2725
* https://tracker.ceph.com/issues/51183
2726
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2727
* https://tracker.ceph.com/issues/51197
2728
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2729
* https://tracker.ceph.com/issues/45434
2730 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2731
2732
h3. 2021 June 11
2733
2734
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2735
2736
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2737
2738
* https://tracker.ceph.com/issues/51169
2739
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2740
* https://tracker.ceph.com/issues/45434
2741
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2742
* https://tracker.ceph.com/issues/48771
2743
    qa: iogen: workload fails to cause balancing
2744
* https://tracker.ceph.com/issues/43216
2745
    MDSMonitor: removes MDS coming out of quorum election
2746
* https://tracker.ceph.com/issues/51182
2747
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2748
* https://tracker.ceph.com/issues/50223
2749
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2750
* https://tracker.ceph.com/issues/48773
2751
    qa: scrub does not complete
2752
* https://tracker.ceph.com/issues/51183
2753
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2754
* https://tracker.ceph.com/issues/51184
2755
    qa: fs:bugs does not specify distro
2756 9 Patrick Donnelly
2757
2758
h3. 2021 June 03
2759
2760
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2761
2762
* https://tracker.ceph.com/issues/45434
2763
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2764
* https://tracker.ceph.com/issues/50016
2765
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2766
* https://tracker.ceph.com/issues/50821
2767
    qa: untar_snap_rm failure during mds thrashing
2768
* https://tracker.ceph.com/issues/50622 (regression)
2769
    msg: active_connections regression
2770
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2771
    qa: failed umount in test_volumes
2772
* https://tracker.ceph.com/issues/48773
2773
    qa: scrub does not complete
2774
* https://tracker.ceph.com/issues/43216
2775
    MDSMonitor: removes MDS coming out of quorum election
2776 7 Patrick Donnelly
2777
2778 8 Patrick Donnelly
h3. 2021 May 18
2779
2780
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2781
2782
Regression in testing kernel caused some failures. Ilya fixed those and the rerun
2783
looked better. Some odd new noise in the rerun relating to packaging and "No
2784
module named 'tasks.ceph'".
2785
2786
* https://tracker.ceph.com/issues/50824
2787
    qa: snaptest-git-ceph bus error
2788
* https://tracker.ceph.com/issues/50622 (regression)
2789
    msg: active_connections regression
2790
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2791
    qa: failed umount in test_volumes
2792
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2793
    qa: quota failure
2794
2795
2796 7 Patrick Donnelly
h3. 2021 May 18
2797
2798
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2799
2800
* https://tracker.ceph.com/issues/50821
2801
    qa: untar_snap_rm failure during mds thrashing
2802
* https://tracker.ceph.com/issues/48773
2803
    qa: scrub does not complete
2804
* https://tracker.ceph.com/issues/45591
2805
    mgr: FAILED ceph_assert(daemon != nullptr)
2806
* https://tracker.ceph.com/issues/50866
2807
    osd: stat mismatch on objects
2808
* https://tracker.ceph.com/issues/50016
2809
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2810
* https://tracker.ceph.com/issues/50867
2811
    qa: fs:mirror: reduced data availability
2812
2814
* https://tracker.ceph.com/issues/50622 (regression)
2815
    msg: active_connections regression
2816
* https://tracker.ceph.com/issues/50223
2817
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2818
* https://tracker.ceph.com/issues/50868
2819
    qa: "kern.log.gz already exists; not overwritten"
2820
* https://tracker.ceph.com/issues/50870
2821
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2822 6 Patrick Donnelly
2823
2824
h3. 2021 May 11
2825
2826
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2827
2828
* one class of failures caused by PR
2829
* https://tracker.ceph.com/issues/48812
2830
    qa: test_scrub_pause_and_resume_with_abort failure
2831
* https://tracker.ceph.com/issues/50390
2832
    mds: monclient: wait_auth_rotating timed out after 30
2833
* https://tracker.ceph.com/issues/48773
2834
    qa: scrub does not complete
2835
* https://tracker.ceph.com/issues/50821
2836
    qa: untar_snap_rm failure during mds thrashing
2837
* https://tracker.ceph.com/issues/50224
2838
    qa: test_mirroring_init_failure_with_recovery failure
2839
* https://tracker.ceph.com/issues/50622 (regression)
2840
    msg: active_connections regression
2841
* https://tracker.ceph.com/issues/50825
2842
    qa: snaptest-git-ceph hang during mon thrashing v2
2843
2845
* https://tracker.ceph.com/issues/50823
2846
    qa: RuntimeError: timeout waiting for cluster to stabilize
2847 5 Patrick Donnelly
2848
2849
h3. 2021 May 14
2850
2851
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2852
2853
* https://tracker.ceph.com/issues/48812
2854
    qa: test_scrub_pause_and_resume_with_abort failure
2855
* https://tracker.ceph.com/issues/50821
2856
    qa: untar_snap_rm failure during mds thrashing
2857
* https://tracker.ceph.com/issues/50622 (regression)
2858
    msg: active_connections regression
2859
* https://tracker.ceph.com/issues/50822
2860
    qa: testing kernel patch for client metrics causes mds abort
2861
* https://tracker.ceph.com/issues/48773
2862
    qa: scrub does not complete
2863
* https://tracker.ceph.com/issues/50823
2864
    qa: RuntimeError: timeout waiting for cluster to stabilize
2865
* https://tracker.ceph.com/issues/50824
2866
    qa: snaptest-git-ceph bus error
2867
* https://tracker.ceph.com/issues/50825
2868
    qa: snaptest-git-ceph hang during mon thrashing v2
2869
* https://tracker.ceph.com/issues/50826
2870
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2871 4 Patrick Donnelly
2872
2873
h3. 2021 May 01
2874
2875
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2876
2877
* https://tracker.ceph.com/issues/45434
2878
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2879
* https://tracker.ceph.com/issues/50281
2880
    qa: untar_snap_rm timeout
2881
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2882
    qa: quota failure
2883
* https://tracker.ceph.com/issues/48773
2884
    qa: scrub does not complete
2885
* https://tracker.ceph.com/issues/50390
2886
    mds: monclient: wait_auth_rotating timed out after 30
2887
* https://tracker.ceph.com/issues/50250
2888
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2889
* https://tracker.ceph.com/issues/50622 (regression)
2890
    msg: active_connections regression
2891
* https://tracker.ceph.com/issues/45591
2892
    mgr: FAILED ceph_assert(daemon != nullptr)
2893
* https://tracker.ceph.com/issues/50221
2894
    qa: snaptest-git-ceph failure in git diff
2895
* https://tracker.ceph.com/issues/50016
2896
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2897 3 Patrick Donnelly
2898
2899
h3. 2021 Apr 15
2900
2901
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2902
2903
* https://tracker.ceph.com/issues/50281
2904
    qa: untar_snap_rm timeout
2905
* https://tracker.ceph.com/issues/50220
2906
    qa: dbench workload timeout
2907
* https://tracker.ceph.com/issues/50246
2908
    mds: failure replaying journal (EMetaBlob)
2909
* https://tracker.ceph.com/issues/50250
2910
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2911
* https://tracker.ceph.com/issues/50016
2912
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2913
* https://tracker.ceph.com/issues/50222
2914
    osd: 5.2s0 deep-scrub : stat mismatch
2915
* https://tracker.ceph.com/issues/45434
2916
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2917
* https://tracker.ceph.com/issues/49845
2918
    qa: failed umount in test_volumes
2919
* https://tracker.ceph.com/issues/37808
2920
    osd: osdmap cache weak_refs assert during shutdown
2921
* https://tracker.ceph.com/issues/50387
2922
    client: fs/snaps failure
2923
* https://tracker.ceph.com/issues/50389
2924
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
2925
* https://tracker.ceph.com/issues/50216
2926
    qa: "ls: cannot access 'lost+found': No such file or directory"
2927
* https://tracker.ceph.com/issues/50390
2928
    mds: monclient: wait_auth_rotating timed out after 30
2929
2930 1 Patrick Donnelly
2931
2932 2 Patrick Donnelly
h3. 2021 Apr 08
2933
2934
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2935
2936
* https://tracker.ceph.com/issues/45434
2937
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2938
* https://tracker.ceph.com/issues/50016
2939
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2940
* https://tracker.ceph.com/issues/48773
2941
    qa: scrub does not complete
2942
* https://tracker.ceph.com/issues/50279
2943
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2944
* https://tracker.ceph.com/issues/50246
2945
    mds: failure replaying journal (EMetaBlob)
2946
* https://tracker.ceph.com/issues/48365
2947
    qa: ffsb build failure on CentOS 8.2
2948
* https://tracker.ceph.com/issues/50216
2949
    qa: "ls: cannot access 'lost+found': No such file or directory"
2950
* https://tracker.ceph.com/issues/50223
2951
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2952
* https://tracker.ceph.com/issues/50280
2953
    cephadm: RuntimeError: uid/gid not found
2954
* https://tracker.ceph.com/issues/50281
2955
    qa: untar_snap_rm timeout
2956
2957 1 Patrick Donnelly
h3. 2021 Apr 08
2958
2959
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2960
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2961
2962
* https://tracker.ceph.com/issues/50246
2963
    mds: failure replaying journal (EMetaBlob)
2964
* https://tracker.ceph.com/issues/50250
2965
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2966
2967
2968
h3. 2021 Apr 07
2969
2970
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
2971
2972
* https://tracker.ceph.com/issues/50215
2973
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
2974
* https://tracker.ceph.com/issues/49466
2975
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2976
* https://tracker.ceph.com/issues/50216
2977
    qa: "ls: cannot access 'lost+found': No such file or directory"
2978
* https://tracker.ceph.com/issues/48773
2979
    qa: scrub does not complete
2980
* https://tracker.ceph.com/issues/49845
2981
    qa: failed umount in test_volumes
2982
* https://tracker.ceph.com/issues/50220
2983
    qa: dbench workload timeout
2984
* https://tracker.ceph.com/issues/50221
2985
    qa: snaptest-git-ceph failure in git diff
2986
* https://tracker.ceph.com/issues/50222
2987
    osd: 5.2s0 deep-scrub : stat mismatch
2988
* https://tracker.ceph.com/issues/50223
2989
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2990
* https://tracker.ceph.com/issues/50224
2991
    qa: test_mirroring_init_failure_with_recovery failure
2992
2993
h3. 2021 Apr 01
2994
2995
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
2996
2997
* https://tracker.ceph.com/issues/48772
2998
    qa: pjd: not ok 9, 44, 80
2999
* https://tracker.ceph.com/issues/50177
3000
    osd: "stalled aio... buggy kernel or bad device?"
3001
* https://tracker.ceph.com/issues/48771
3002
    qa: iogen: workload fails to cause balancing
3003
* https://tracker.ceph.com/issues/49845
3004
    qa: failed umount in test_volumes
3005
* https://tracker.ceph.com/issues/48773
3006
    qa: scrub does not complete
3007
* https://tracker.ceph.com/issues/48805
3008
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3009
* https://tracker.ceph.com/issues/50178
3010
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3011
* https://tracker.ceph.com/issues/45434
3012
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3013
3014
h3. 2021 Mar 24
3015
3016
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3017
3018
* https://tracker.ceph.com/issues/49500
3019
    qa: "Assertion `cb_done' failed."
3020
* https://tracker.ceph.com/issues/50019
3021
    qa: mount failure with cephadm "probably no MDS server is up?"
3022
* https://tracker.ceph.com/issues/50020
3023
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3024
* https://tracker.ceph.com/issues/48773
3025
    qa: scrub does not complete
3026
* https://tracker.ceph.com/issues/45434
3027
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3028
* https://tracker.ceph.com/issues/48805
3029
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3030
* https://tracker.ceph.com/issues/48772
3031
    qa: pjd: not ok 9, 44, 80
3032
* https://tracker.ceph.com/issues/50021
3033
    qa: snaptest-git-ceph failure during mon thrashing
3034
* https://tracker.ceph.com/issues/48771
3035
    qa: iogen: workload fails to cause balancing
3036
* https://tracker.ceph.com/issues/50016
3037
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3038
* https://tracker.ceph.com/issues/49466
3039
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3040
3041
3042
h3. 2021 Mar 18
3043
3044
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3045
3046
* https://tracker.ceph.com/issues/49466
3047
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3048
* https://tracker.ceph.com/issues/48773
3049
    qa: scrub does not complete
3050
* https://tracker.ceph.com/issues/48805
3051
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3052
* https://tracker.ceph.com/issues/45434
3053
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3054
* https://tracker.ceph.com/issues/49845
3055
    qa: failed umount in test_volumes
3056
* https://tracker.ceph.com/issues/49605
3057
    mgr: drops command on the floor
3058
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3059
    qa: quota failure
3060
* https://tracker.ceph.com/issues/49928
3061
    client: items pinned in cache preventing unmount x2
3062
3063
h3. 2021 Mar 15
3064
3065
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3066
3067
* https://tracker.ceph.com/issues/49842
3068
    qa: stuck pkg install
3069
* https://tracker.ceph.com/issues/49466
3070
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3071
* https://tracker.ceph.com/issues/49822
3072
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3073
* https://tracker.ceph.com/issues/49240
3074
    terminate called after throwing an instance of 'std::bad_alloc'
3075
* https://tracker.ceph.com/issues/48773
3076
    qa: scrub does not complete
3077
* https://tracker.ceph.com/issues/45434
3078
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3079
* https://tracker.ceph.com/issues/49500
3080
    qa: "Assertion `cb_done' failed."
3081
* https://tracker.ceph.com/issues/49843
3082
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3083
* https://tracker.ceph.com/issues/49845
3084
    qa: failed umount in test_volumes
3085
* https://tracker.ceph.com/issues/48805
3086
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3087
* https://tracker.ceph.com/issues/49605
3088
    mgr: drops command on the floor
3089
3090
and failure caused by PR: https://github.com/ceph/ceph/pull/39969
3091
3092
3093
h3. 2021 Mar 09
3094
3095
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3096
3097
* https://tracker.ceph.com/issues/49500
3098
    qa: "Assertion `cb_done' failed."
3099
* https://tracker.ceph.com/issues/48805
3100
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3101
* https://tracker.ceph.com/issues/48773
3102
    qa: scrub does not complete
3103
* https://tracker.ceph.com/issues/45434
3104
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3105
* https://tracker.ceph.com/issues/49240
3106
    terminate called after throwing an instance of 'std::bad_alloc'
3107
* https://tracker.ceph.com/issues/49466
3108
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3109
* https://tracker.ceph.com/issues/49684
3110
    qa: fs:cephadm mount does not wait for mds to be created
3111
* https://tracker.ceph.com/issues/48771
3112
    qa: iogen: workload fails to cause balancing