Main » History » Version 233

Patrick Donnelly, 03/21/2024 01:04 AM

h1. <code>main</code> branch

h3. 2024-03-20

https://pulpito.ceph.com/pdonnell-2024-03-20_18:16:52-fs-wip-batrick-testing-20240320.145742-distro-default-smithi/

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs were filtered out because the builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed the log WRN/ERR checks.
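To make that concrete, the restored check is essentially a scan of the cluster log for WRN/ERR entries that are not explicitly ignorelisted. A minimal sketch follows; the log path and the ignore patterns are illustrative assumptions, not the actual qa code:

<pre><code class="bash">
#!/usr/bin/env bash
# Minimal sketch of what the restored WRN/ERR log check amounts to.
# CLUSTER_LOG and the ignore patterns are placeholders, not the real qa config.
CLUSTER_LOG=${1:-ceph.log}
if grep -E 'cluster \[(WRN|ERR)\]' "$CLUSTER_LOG" | grep -vE 'POOL_APP_NOT_ENABLED|PG_DEGRADED'; then
    echo "unexpected WRN/ERR entries in cluster log" >&2
    exit 1
fi
</code></pre>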

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to:

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denial related failures
  c) unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
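For reference, the symptom can be checked by hand roughly as follows. This is a sketch only; the mount point is an assumption, not taken from the job logs:

<pre><code class="bash">
#!/usr/bin/env bash
# Sketch of the i64502 symptom: fusermount -u returns, but the ceph-fuse
# client only finishes unmounting after the daemons are stopped.
# /mnt/cephfs is an assumed mount point.
MNT=/mnt/cephfs
fusermount -u "$MNT" || echo "fusermount -u failed" >&2
if mountpoint -q "$MNT"; then
    echo "still mounted after fusermount -u" >&2
fi
pgrep -af ceph-fuse || echo "no lingering ceph-fuse process"
</code></pre>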

h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* The fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS.
* In the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS.

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(Never mind the fs:upgrade test failure - the PR is excluded from merge.)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin), caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 JULY 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes those failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace

* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

1034
h3. 18 July 2023
1035
1036
* https://tracker.ceph.com/issues/52624
1037
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1038
* https://tracker.ceph.com/issues/57676
1039
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1040
* https://tracker.ceph.com/issues/54460
1041
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1042
* https://tracker.ceph.com/issues/57655
1043
    qa: fs:mixed-clients kernel_untar_build failure
1044
* https://tracker.ceph.com/issues/51964
1045
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1046
* https://tracker.ceph.com/issues/59344
1047
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1048
* https://tracker.ceph.com/issues/61182
1049
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1050
* https://tracker.ceph.com/issues/61957
1051
    test_client_limits.TestClientLimits.test_client_release_bug
1052
* https://tracker.ceph.com/issues/59348
1053
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1054
* https://tracker.ceph.com/issues/61892
1055
    test_strays.TestStrays.test_snapshot_remove failed
1056
* https://tracker.ceph.com/issues/59346
1057
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1058
* https://tracker.ceph.com/issues/44565
1059
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1060
* https://tracker.ceph.com/issues/62067
1061
    ffsb.sh failure "Resource temporarily unavailable"
1062 156 Venky Shankar
1063
1064
h3. 17 July 2023
1065
1066
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1067
1068
* https://tracker.ceph.com/issues/61982
1069
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1070
* https://tracker.ceph.com/issues/59344
1071
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1072
* https://tracker.ceph.com/issues/61182
1073
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1074
* https://tracker.ceph.com/issues/61957
1075
    test_client_limits.TestClientLimits.test_client_release_bug
1076
* https://tracker.ceph.com/issues/61400
1077
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1078
* https://tracker.ceph.com/issues/59348
1079
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1080
* https://tracker.ceph.com/issues/61892
1081
    test_strays.TestStrays.test_snapshot_remove failed
1082
* https://tracker.ceph.com/issues/59346
1083
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1084
* https://tracker.ceph.com/issues/62036
1085
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1086
* https://tracker.ceph.com/issues/61737
1087
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1088
* https://tracker.ceph.com/issues/44565
1089
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1090 155 Rishabh Dave
1091 1 Patrick Donnelly
1092 153 Rishabh Dave
h3. 13 July 2023 Run 2
1093 152 Rishabh Dave
1094
1095
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1096
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1097
1098
* https://tracker.ceph.com/issues/61957
1099
  test_client_limits.TestClientLimits.test_client_release_bug
1100
* https://tracker.ceph.com/issues/61982
1101
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1102
* https://tracker.ceph.com/issues/59348
1103
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1104
* https://tracker.ceph.com/issues/59344
1105
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1106
* https://tracker.ceph.com/issues/54460
1107
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1108
* https://tracker.ceph.com/issues/57655
1109
  qa: fs:mixed-clients kernel_untar_build failure
1110
* https://tracker.ceph.com/issues/61400
1111
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1112
* https://tracker.ceph.com/issues/61399
1113
  ior build failure
1114
1115 151 Venky Shankar
h3. 13 July 2023
1116
1117
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1118
1119
* https://tracker.ceph.com/issues/54460
1120
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1121
* https://tracker.ceph.com/issues/61400
1122
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1123
* https://tracker.ceph.com/issues/57655
1124
    qa: fs:mixed-clients kernel_untar_build failure
1125
* https://tracker.ceph.com/issues/61945
1126
    LibCephFS.DelegTimeout failure
1127
* https://tracker.ceph.com/issues/52624
1128
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1129
* https://tracker.ceph.com/issues/57676
1130
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1131
* https://tracker.ceph.com/issues/59348
1132
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1133
* https://tracker.ceph.com/issues/59344
1134
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1135
* https://tracker.ceph.com/issues/51964
1136
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1137
* https://tracker.ceph.com/issues/59346
1138
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1139
* https://tracker.ceph.com/issues/61982
1140
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1141 150 Rishabh Dave
1142
1143
h3. 13 Jul 2023
1144
1145
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1146
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1147
1148
* https://tracker.ceph.com/issues/61957
1149
  test_client_limits.TestClientLimits.test_client_release_bug
1150
* https://tracker.ceph.com/issues/59348
1151
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1152
* https://tracker.ceph.com/issues/59346
1153
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1154
* https://tracker.ceph.com/issues/48773
1155
  scrub does not complete: reached max tries
1156
* https://tracker.ceph.com/issues/59344
1157
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1158
* https://tracker.ceph.com/issues/52438
1159
  qa: ffsb timeout
1160
* https://tracker.ceph.com/issues/57656
1161
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1162
* https://tracker.ceph.com/issues/58742
1163
  xfstests-dev: kcephfs: generic
1164
* https://tracker.ceph.com/issues/61399
1165 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1166 149 Rishabh Dave
1167 148 Rishabh Dave
h3. 12 July 2023
1168
1169
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1170
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1171
1172
* https://tracker.ceph.com/issues/61892
1173
  test_strays.TestStrays.test_snapshot_remove failed
1174
* https://tracker.ceph.com/issues/59348
1175
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1176
* https://tracker.ceph.com/issues/53859
1177
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1178
* https://tracker.ceph.com/issues/59346
1179
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1180
* https://tracker.ceph.com/issues/58742
1181
  xfstests-dev: kcephfs: generic
1182
* https://tracker.ceph.com/issues/59344
1183
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1184
* https://tracker.ceph.com/issues/52438
1185
  qa: ffsb timeout
1186
* https://tracker.ceph.com/issues/57656
1187
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1188
* https://tracker.ceph.com/issues/54460
1189
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1190
* https://tracker.ceph.com/issues/57655
1191
  qa: fs:mixed-clients kernel_untar_build failure
1192
* https://tracker.ceph.com/issues/61182
1193
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1194
* https://tracker.ceph.com/issues/61400
1195
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1196 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1197 146 Patrick Donnelly
  reached max tries: scrub does not complete
1198
1199
h3. 05 July 2023
1200
1201
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1202
1203 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1204 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1205
1206
h3. 27 Jun 2023
1207
1208
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1209 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1210
1211
* https://tracker.ceph.com/issues/59348
1212
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1213
* https://tracker.ceph.com/issues/54460
1214
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1215
* https://tracker.ceph.com/issues/59346
1216
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1217
* https://tracker.ceph.com/issues/59344
1218
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1219
* https://tracker.ceph.com/issues/61399
1220
  libmpich: undefined references to fi_strerror
1221
* https://tracker.ceph.com/issues/50223
1222
  client.xxxx isn't responding to mclientcaps(revoke)
1223 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1224
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1225 142 Venky Shankar
1226
1227
h3. 22 June 2023
1228
1229
* https://tracker.ceph.com/issues/57676
1230
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1231
* https://tracker.ceph.com/issues/54460
1232
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1233
* https://tracker.ceph.com/issues/59344
1234
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1235
* https://tracker.ceph.com/issues/59348
1236
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1237
* https://tracker.ceph.com/issues/61400
1238
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1239
* https://tracker.ceph.com/issues/57655
1240
    qa: fs:mixed-clients kernel_untar_build failure
1241
* https://tracker.ceph.com/issues/61394
1242
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1243
* https://tracker.ceph.com/issues/61762
1244
    qa: wait_for_clean: failed before timeout expired
1245
* https://tracker.ceph.com/issues/61775
1246
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1247
* https://tracker.ceph.com/issues/44565
1248
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1249
* https://tracker.ceph.com/issues/61790
1250
    cephfs client to mds comms remain silent after reconnect
1251
* https://tracker.ceph.com/issues/61791
1252
    snaptest-git-ceph.sh test timed out (job dead)
1253 139 Venky Shankar
1254
1255
h3. 20 June 2023
1256
1257
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1258
1259
* https://tracker.ceph.com/issues/57676
1260
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1261
* https://tracker.ceph.com/issues/54460
1262
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1263 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1264 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1265 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1266 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1267
* https://tracker.ceph.com/issues/59344
1268
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1269
* https://tracker.ceph.com/issues/59348
1270
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1271
* https://tracker.ceph.com/issues/57656
1272
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1273
* https://tracker.ceph.com/issues/61400
1274
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1275
* https://tracker.ceph.com/issues/57655
1276
    qa: fs:mixed-clients kernel_untar_build failure
1277
* https://tracker.ceph.com/issues/44565
1278
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1279
* https://tracker.ceph.com/issues/61737
1280 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1281
1282
h3. 16 June 2023
1283
1284 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1285 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1286 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1287 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1288
1289
1290
* https://tracker.ceph.com/issues/59344
1291
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1292 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1293
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1294 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1295
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1296
* https://tracker.ceph.com/issues/57656
1297
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1298
* https://tracker.ceph.com/issues/54460
1299
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1300 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1301
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1302 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1303
  libmpich: undefined references to fi_strerror
1304
* https://tracker.ceph.com/issues/58945
1305
  xfstests-dev: ceph-fuse: generic 
1306 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1307 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1308
1309
h3. 24 May 2023
1310
1311
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1312
1313
* https://tracker.ceph.com/issues/57676
1314
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1315
* https://tracker.ceph.com/issues/59683
1316
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1317
* https://tracker.ceph.com/issues/61399
1318
    qa: "[Makefile:299: ior] Error 1"
1319
* https://tracker.ceph.com/issues/61265
1320
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1321
* https://tracker.ceph.com/issues/59348
1322
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1323
* https://tracker.ceph.com/issues/59346
1324
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1325
* https://tracker.ceph.com/issues/61400
1326
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1327
* https://tracker.ceph.com/issues/54460
1328
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1329
* https://tracker.ceph.com/issues/51964
1330
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1331
* https://tracker.ceph.com/issues/59344
1332
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1333
* https://tracker.ceph.com/issues/61407
1334
    mds: abort on CInode::verify_dirfrags
1335
* https://tracker.ceph.com/issues/48773
1336
    qa: scrub does not complete
1337
* https://tracker.ceph.com/issues/57655
1338
    qa: fs:mixed-clients kernel_untar_build failure
1339
* https://tracker.ceph.com/issues/61409
1340 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1341
1342
h3. 15 May 2023
1343 130 Venky Shankar
1344 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1345
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1346
1347
* https://tracker.ceph.com/issues/52624
1348
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1349
* https://tracker.ceph.com/issues/54460
1350
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1351
* https://tracker.ceph.com/issues/57676
1352
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1353
* https://tracker.ceph.com/issues/59684 [kclient bug]
1354
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1355
* https://tracker.ceph.com/issues/59348
1356
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1357 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1358
    dbench test results in call trace in dmesg [kclient bug]
1359 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1360 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1361 125 Venky Shankar
1362
 
1363 129 Rishabh Dave
h3. 11 May 2023
1364
1365
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1366
1367
* https://tracker.ceph.com/issues/59684 [kclient bug]
1368
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1369
* https://tracker.ceph.com/issues/59348
1370
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1371
* https://tracker.ceph.com/issues/57655
1372
  qa: fs:mixed-clients kernel_untar_build failure
1373
* https://tracker.ceph.com/issues/57676
1374
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1375
* https://tracker.ceph.com/issues/55805
1376
  error during scrub thrashing reached max tries in 900 secs
1377
* https://tracker.ceph.com/issues/54460
1378
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1379
* https://tracker.ceph.com/issues/57656
1380
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1381
* https://tracker.ceph.com/issues/58220
1382
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1383 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1384
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1385 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1386
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1387 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1388
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1389 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1390
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1391
1392 125 Venky Shankar
h3. 11 May 2023
1393 127 Venky Shankar
1394
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1395 126 Venky Shankar
1396 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1397
was included in the branch; however, the PR got updated and needs a retest).
1398
1399
* https://tracker.ceph.com/issues/52624
1400
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1401
* https://tracker.ceph.com/issues/54460
1402
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1403
* https://tracker.ceph.com/issues/57676
1404
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1405
* https://tracker.ceph.com/issues/59683
1406
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1407
* https://tracker.ceph.com/issues/59684 [kclient bug]
1408
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1409
* https://tracker.ceph.com/issues/59348
1410 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1411
1412
h3. 09 May 2023
1413
1414
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1415
1416
* https://tracker.ceph.com/issues/52624
1417
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1418
* https://tracker.ceph.com/issues/58340
1419
    mds: fsstress.sh hangs with multimds
1420
* https://tracker.ceph.com/issues/54460
1421
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1422
* https://tracker.ceph.com/issues/57676
1423
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1424
* https://tracker.ceph.com/issues/51964
1425
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1426
* https://tracker.ceph.com/issues/59350
1427
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1428
* https://tracker.ceph.com/issues/59683
1429
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1430
* https://tracker.ceph.com/issues/59684 [kclient bug]
1431
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1432
* https://tracker.ceph.com/issues/59348
1433 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1434
1435
h3. 10 Apr 2023
1436
1437
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1438
1439
* https://tracker.ceph.com/issues/52624
1440
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1441
* https://tracker.ceph.com/issues/58340
1442
    mds: fsstress.sh hangs with multimds
1443
* https://tracker.ceph.com/issues/54460
1444
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1445
* https://tracker.ceph.com/issues/57676
1446
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1447 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1448 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1449 121 Rishabh Dave
1450 120 Rishabh Dave
h3. 31 Mar 2023
1451 122 Rishabh Dave
1452
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1453 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1454
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1455
1456
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1457
1458
* https://tracker.ceph.com/issues/57676
1459
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1460
* https://tracker.ceph.com/issues/54460
1461
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1462
* https://tracker.ceph.com/issues/58220
1463
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1464
* https://tracker.ceph.com/issues/58220#note-9
1465
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1466
* https://tracker.ceph.com/issues/56695
1467
  Command failed (workunit test suites/pjd.sh)
1468
* https://tracker.ceph.com/issues/58564 
1469
  workunit dbench failed with error code 1
1470
* https://tracker.ceph.com/issues/57206
1471
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1472
* https://tracker.ceph.com/issues/57580
1473
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1474
* https://tracker.ceph.com/issues/58940
1475
  ceph osd hit ceph_abort
1476
* https://tracker.ceph.com/issues/55805
1477 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1478
1479
h3. 30 March 2023
1480
1481
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1482
1483
* https://tracker.ceph.com/issues/58938
1484
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1485
* https://tracker.ceph.com/issues/51964
1486
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1487
* https://tracker.ceph.com/issues/58340
1488 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1489
1490 115 Venky Shankar
h3. 29 March 2023
1491 114 Venky Shankar
1492
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1493
1494
* https://tracker.ceph.com/issues/56695
1495
    [RHEL stock] pjd test failures
1496
* https://tracker.ceph.com/issues/57676
1497
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1498
* https://tracker.ceph.com/issues/57087
1499
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1500 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1501
    mds: fsstress.sh hangs with multimds
1502 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1503
    qa: fs:mixed-clients kernel_untar_build failure
1504 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1505
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1506 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1507 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1508
1509
h3. 13 Mar 2023
1510
1511
* https://tracker.ceph.com/issues/56695
1512
    [RHEL stock] pjd test failures
1513
* https://tracker.ceph.com/issues/57676
1514
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1515
* https://tracker.ceph.com/issues/51964
1516
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1517
* https://tracker.ceph.com/issues/54460
1518
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1519
* https://tracker.ceph.com/issues/57656
1520 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1521
1522
h3. 09 Mar 2023
1523
1524
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1525
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1526
1527
* https://tracker.ceph.com/issues/56695
1528
    [RHEL stock] pjd test failures
1529
* https://tracker.ceph.com/issues/57676
1530
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1531
* https://tracker.ceph.com/issues/51964
1532
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1533
* https://tracker.ceph.com/issues/54460
1534
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1535
* https://tracker.ceph.com/issues/58340
1536
    mds: fsstress.sh hangs with multimds
1537
* https://tracker.ceph.com/issues/57087
1538 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1539
1540
h3. 07 Mar 2023
1541
1542
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1543
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1544
1545
* https://tracker.ceph.com/issues/56695
1546
    [RHEL stock] pjd test failures
1547
* https://tracker.ceph.com/issues/57676
1548
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1549
* https://tracker.ceph.com/issues/51964
1550
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1551
* https://tracker.ceph.com/issues/57656
1552
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1553
* https://tracker.ceph.com/issues/57655
1554
    qa: fs:mixed-clients kernel_untar_build failure
1555
* https://tracker.ceph.com/issues/58220
1556
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1557
* https://tracker.ceph.com/issues/54460
1558
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1559
* https://tracker.ceph.com/issues/58934
1560 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1561
1562
h3. 28 Feb 2023
1563
1564
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1565
1566
* https://tracker.ceph.com/issues/56695
1567
    [RHEL stock] pjd test failures
1568
* https://tracker.ceph.com/issues/57676
1569
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1570 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1571 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1572
1573 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1574
1575
h3. 25 Jan 2023
1576
1577
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1578
1579
* https://tracker.ceph.com/issues/52624
1580
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1581
* https://tracker.ceph.com/issues/56695
1582
    [RHEL stock] pjd test failures
1583
* https://tracker.ceph.com/issues/57676
1584
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1585
* https://tracker.ceph.com/issues/56446
1586
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1587
* https://tracker.ceph.com/issues/57206
1588
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1589
* https://tracker.ceph.com/issues/58220
1590
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1591
* https://tracker.ceph.com/issues/58340
1592
  mds: fsstress.sh hangs with multimds
1593
* https://tracker.ceph.com/issues/56011
1594
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1595
* https://tracker.ceph.com/issues/54460
1596 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1597
1598
h3. 30 Jan 2023
1599
1600
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1601
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1602 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1603
1604 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1605
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1606
* https://tracker.ceph.com/issues/56695
1607
  [RHEL stock] pjd test failures
1608
* https://tracker.ceph.com/issues/57676
1609
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1610
* https://tracker.ceph.com/issues/55332
1611
  Failure in snaptest-git-ceph.sh
1612
* https://tracker.ceph.com/issues/51964
1613
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1614
* https://tracker.ceph.com/issues/56446
1615
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1616
* https://tracker.ceph.com/issues/57655 
1617
  qa: fs:mixed-clients kernel_untar_build failure
1618
* https://tracker.ceph.com/issues/54460
1619
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1620 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1621
  mds: fsstress.sh hangs with multimds
1622 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1623 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1624
1625
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1626 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1627
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1628 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1629 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1630
1631
h3. 15 Dec 2022
1632
1633
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1634
1635
* https://tracker.ceph.com/issues/52624
1636
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1637
* https://tracker.ceph.com/issues/56695
1638
    [RHEL stock] pjd test failures
1639
* https://tracker.ceph.com/issues/58219
1640
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1641
* https://tracker.ceph.com/issues/57655
1642
    qa: fs:mixed-clients kernel_untar_build failure
1643
* https://tracker.ceph.com/issues/57676
1644
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1645
* https://tracker.ceph.com/issues/58340
1646 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1647
1648
h3. 08 Dec 2022
1649 99 Venky Shankar
1650 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1651
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1652
1653
(lots of transient git.ceph.com failures)
1654
1655
* https://tracker.ceph.com/issues/52624
1656
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1657
* https://tracker.ceph.com/issues/56695
1658
    [RHEL stock] pjd test failures
1659
* https://tracker.ceph.com/issues/57655
1660
    qa: fs:mixed-clients kernel_untar_build failure
1661
* https://tracker.ceph.com/issues/58219
1662
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1663
* https://tracker.ceph.com/issues/58220
1664
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1665 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1666
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1667 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1668
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1669
* https://tracker.ceph.com/issues/54460
1670
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1671 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1672 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1673
1674
h3. 14 Oct 2022
1675
1676
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1677
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1678
1679
* https://tracker.ceph.com/issues/52624
1680
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1681
* https://tracker.ceph.com/issues/55804
1682
    Command failed (workunit test suites/pjd.sh)
1683
* https://tracker.ceph.com/issues/51964
1684
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1685
* https://tracker.ceph.com/issues/57682
1686
    client: ERROR: test_reconnect_after_blocklisted
1687 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1688 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1689
1690
h3. 10 Oct 2022
1691 92 Rishabh Dave
1692 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1693
1694
reruns
1695
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1696 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1697 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1698 93 Rishabh Dave
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1699 91 Rishabh Dave
1700
known bugs
1701
* https://tracker.ceph.com/issues/52624
1702
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1703
* https://tracker.ceph.com/issues/50223
1704
  client.xxxx isn't responding to mclientcaps(revoke
1705
* https://tracker.ceph.com/issues/57299
1706
  qa: test_dump_loads fails with JSONDecodeError
1707
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1708
  qa: fs:mixed-clients kernel_untar_build failure
1709
* https://tracker.ceph.com/issues/57206
1710 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1711
1712
h3. 2022 Sep 29
1713
1714
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1715
1716
* https://tracker.ceph.com/issues/55804
1717
  Command failed (workunit test suites/pjd.sh)
1718
* https://tracker.ceph.com/issues/36593
1719
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1720
* https://tracker.ceph.com/issues/52624
1721
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1722
* https://tracker.ceph.com/issues/51964
1723
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1724
* https://tracker.ceph.com/issues/56632
1725
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1726
* https://tracker.ceph.com/issues/50821
1727 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1728
1729
h3. 2022 Sep 26
1730
1731
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1732
1733
* https://tracker.ceph.com/issues/55804
1734
    qa failure: pjd link tests failed
1735
* https://tracker.ceph.com/issues/57676
1736
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1737
* https://tracker.ceph.com/issues/52624
1738
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1739
* https://tracker.ceph.com/issues/57580
1740
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1741
* https://tracker.ceph.com/issues/48773
1742
    qa: scrub does not complete
1743
* https://tracker.ceph.com/issues/57299
1744
    qa: test_dump_loads fails with JSONDecodeError
1745
* https://tracker.ceph.com/issues/57280
1746
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1747
* https://tracker.ceph.com/issues/57205
1748
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1749
* https://tracker.ceph.com/issues/57656
1750
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1751
* https://tracker.ceph.com/issues/57677
1752
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1753
* https://tracker.ceph.com/issues/57206
1754
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1755
* https://tracker.ceph.com/issues/57446
1756
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1757 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1758
    qa: fs:mixed-clients kernel_untar_build failure
1759 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1760
    client: ERROR: test_reconnect_after_blocklisted
1761 87 Patrick Donnelly
1762
1763
h3. 2022 Sep 22
1764
1765
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1766
1767
* https://tracker.ceph.com/issues/57299
1768
    qa: test_dump_loads fails with JSONDecodeError
1769
* https://tracker.ceph.com/issues/57205
1770
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1771
* https://tracker.ceph.com/issues/52624
1772
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1773
* https://tracker.ceph.com/issues/57580
1774
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1775
* https://tracker.ceph.com/issues/57280
1776
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1777
* https://tracker.ceph.com/issues/48773
1778
    qa: scrub does not complete
1779
* https://tracker.ceph.com/issues/56446
1780
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1781
* https://tracker.ceph.com/issues/57206
1782
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1783
* https://tracker.ceph.com/issues/51267
1784
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1785
1786
NEW:
1787
1788
* https://tracker.ceph.com/issues/57656
1789
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1790
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1791
    qa: fs:mixed-clients kernel_untar_build failure
1792
* https://tracker.ceph.com/issues/57657
1793
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1794
1795
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1796 80 Venky Shankar
1797 79 Venky Shankar
1798
h3. 2022 Sep 16
1799
1800
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1801
1802
* https://tracker.ceph.com/issues/57446
1803
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1804
* https://tracker.ceph.com/issues/57299
1805
    qa: test_dump_loads fails with JSONDecodeError
1806
* https://tracker.ceph.com/issues/50223
1807
    client.xxxx isn't responding to mclientcaps(revoke)
1808
* https://tracker.ceph.com/issues/52624
1809
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1810
* https://tracker.ceph.com/issues/57205
1811
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1812
* https://tracker.ceph.com/issues/57280
1813
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1814
* https://tracker.ceph.com/issues/51282
1815
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1816
* https://tracker.ceph.com/issues/48203
1817
    qa: quota failure
1818
* https://tracker.ceph.com/issues/36593
1819
    qa: quota failure caused by clients stepping on each other
1820
* https://tracker.ceph.com/issues/57580
1821 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1822
1823 76 Rishabh Dave
1824
h3. 2022 Aug 26
1825
1826
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1827
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1828
1829
* https://tracker.ceph.com/issues/57206
1830
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1831
* https://tracker.ceph.com/issues/56632
1832
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1833
* https://tracker.ceph.com/issues/56446
1834
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1835
* https://tracker.ceph.com/issues/51964
1836
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1837
* https://tracker.ceph.com/issues/53859
1838
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1839
1840
* https://tracker.ceph.com/issues/54460
1841
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1842
* https://tracker.ceph.com/issues/54462
1843
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1844
1846
* https://tracker.ceph.com/issues/36593
1847
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1848
1849
* https://tracker.ceph.com/issues/52624
1850
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1851
* https://tracker.ceph.com/issues/55804
1852
  Command failed (workunit test suites/pjd.sh)
1853
* https://tracker.ceph.com/issues/50223
1854
  client.xxxx isn't responding to mclientcaps(revoke)
1855 75 Venky Shankar
1856
1857
h3. 2022 Aug 22
1858
1859
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1860
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1861
1862
* https://tracker.ceph.com/issues/52624
1863
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1864
* https://tracker.ceph.com/issues/56446
1865
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1866
* https://tracker.ceph.com/issues/55804
1867
    Command failed (workunit test suites/pjd.sh)
1868
* https://tracker.ceph.com/issues/51278
1869
    mds: "FAILED ceph_assert(!segments.empty())"
1870
* https://tracker.ceph.com/issues/54460
1871
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1872
* https://tracker.ceph.com/issues/57205
1873
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1874
* https://tracker.ceph.com/issues/57206
1875
    ceph_test_libcephfs_reclaim crashes during test
1876
* https://tracker.ceph.com/issues/53859
1877
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1878
* https://tracker.ceph.com/issues/50223
1879 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1880
1881
h3. 2022 Aug 12
1882
1883
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1884
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1885
1886
* https://tracker.ceph.com/issues/52624
1887
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1888
* https://tracker.ceph.com/issues/56446
1889
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1890
* https://tracker.ceph.com/issues/51964
1891
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1892
* https://tracker.ceph.com/issues/55804
1893
    Command failed (workunit test suites/pjd.sh)
1894
* https://tracker.ceph.com/issues/50223
1895
    client.xxxx isn't responding to mclientcaps(revoke)
1896
* https://tracker.ceph.com/issues/50821
1897 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1898 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1899 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1900
1901
h3. 2022 Aug 04
1902
1903
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1904
1905 69 Rishabh Dave
Unrelated teuthology failure on rhel
1906 68 Rishabh Dave
1907
h3. 2022 Jul 25
1908
1909
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1910
1911 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1912
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1913 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1914
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1915
1916
* https://tracker.ceph.com/issues/55804
1917
  Command failed (workunit test suites/pjd.sh)
1918
* https://tracker.ceph.com/issues/50223
1919
  client.xxxx isn't responding to mclientcaps(revoke)
1920
1921
* https://tracker.ceph.com/issues/54460
1922
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1923 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1924 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1925 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1926 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1927
1928
h3. 2022 July 22
1929
1930
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1931
1932
MDS_HEALTH_DUMMY error in log fixed by followup commit.
1933
transient selinux ping failure
1934
1935
* https://tracker.ceph.com/issues/56694
1936
    qa: avoid blocking forever on hung umount
1937
* https://tracker.ceph.com/issues/56695
1938
    [RHEL stock] pjd test failures
1939
* https://tracker.ceph.com/issues/56696
1940
    admin keyring disappears during qa run
1941
* https://tracker.ceph.com/issues/56697
1942
    qa: fs/snaps fails for fuse
1943
* https://tracker.ceph.com/issues/50222
1944
    osd: 5.2s0 deep-scrub : stat mismatch
1945
* https://tracker.ceph.com/issues/56698
1946
    client: FAILED ceph_assert(_size == 0)
1947
* https://tracker.ceph.com/issues/50223
1948
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1949 66 Rishabh Dave
1950 65 Rishabh Dave
1951
h3. 2022 Jul 15
1952
1953
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1954
1955
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1956
1957
* https://tracker.ceph.com/issues/53859
1958
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1959
* https://tracker.ceph.com/issues/55804
1960
  Command failed (workunit test suites/pjd.sh)
1961
* https://tracker.ceph.com/issues/50223
1962
  client.xxxx isn't responding to mclientcaps(revoke)
1963
* https://tracker.ceph.com/issues/50222
1964
  osd: deep-scrub : stat mismatch
1965
1966
* https://tracker.ceph.com/issues/56632
1967
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1968
* https://tracker.ceph.com/issues/56634
1969
  workunit test fs/snaps/snaptest-intodir.sh
1970
* https://tracker.ceph.com/issues/56644
1971
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1972
1973 61 Rishabh Dave
1974
1975
h3. 2022 July 05
1976 62 Rishabh Dave
1977 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1978
1979
On the 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1980
1981
On the 2nd re-run only a few jobs failed -
1982 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1983
1984
1985
* https://tracker.ceph.com/issues/56446
1986
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1987
* https://tracker.ceph.com/issues/55804
1988
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1989
1990
* https://tracker.ceph.com/issues/56445
1991 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1992
* https://tracker.ceph.com/issues/51267
1993
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1994 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1995
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1996 61 Rishabh Dave
1997 58 Venky Shankar
1998
1999
h3. 2022 July 04
2000
2001
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2002
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2003
2004
* https://tracker.ceph.com/issues/56445
2005 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2006
* https://tracker.ceph.com/issues/56446
2007
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2008
* https://tracker.ceph.com/issues/51964
2009 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2010 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2011 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2012
2013
h3. 2022 June 20
2014
2015
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2016
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2017
2018
* https://tracker.ceph.com/issues/52624
2019
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2020
* https://tracker.ceph.com/issues/55804
2021
    qa failure: pjd link tests failed
2022
* https://tracker.ceph.com/issues/54108
2023
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2024
* https://tracker.ceph.com/issues/55332
2025 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2026
2027
h3. 2022 June 13
2028
2029
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2030
2031
* https://tracker.ceph.com/issues/56024
2032
    cephadm: removes ceph.conf during qa run causing command failure
2033
* https://tracker.ceph.com/issues/48773
2034
    qa: scrub does not complete
2035
* https://tracker.ceph.com/issues/56012
2036
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2037 55 Venky Shankar
2038 54 Venky Shankar
2039
h3. 2022 Jun 13
2040
2041
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2042
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2043
2044
* https://tracker.ceph.com/issues/52624
2045
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2046
* https://tracker.ceph.com/issues/51964
2047
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2048
* https://tracker.ceph.com/issues/53859
2049
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2050
* https://tracker.ceph.com/issues/55804
2051
    qa failure: pjd link tests failed
2052
* https://tracker.ceph.com/issues/56003
2053
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2054
* https://tracker.ceph.com/issues/56011
2055
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2056
* https://tracker.ceph.com/issues/56012
2057 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2058
2059
h3. 2022 Jun 07
2060
2061
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2062
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2063
2064
* https://tracker.ceph.com/issues/52624
2065
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2066
* https://tracker.ceph.com/issues/50223
2067
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2068
* https://tracker.ceph.com/issues/50224
2069 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2070
2071
h3. 2022 May 12
2072 52 Venky Shankar
2073 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2074
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2075
2076
* https://tracker.ceph.com/issues/52624
2077
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2078
* https://tracker.ceph.com/issues/50223
2079
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2080
* https://tracker.ceph.com/issues/55332
2081
    Failure in snaptest-git-ceph.sh
2082
* https://tracker.ceph.com/issues/53859
2083 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2084 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2085
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2086 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2087 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2088
2089 50 Venky Shankar
h3. 2022 May 04
2090
2091
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2092 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2093
2094
* https://tracker.ceph.com/issues/52624
2095
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2096
* https://tracker.ceph.com/issues/50223
2097
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2098
* https://tracker.ceph.com/issues/55332
2099
    Failure in snaptest-git-ceph.sh
2100
* https://tracker.ceph.com/issues/53859
2101
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2102
* https://tracker.ceph.com/issues/55516
2103
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2104
* https://tracker.ceph.com/issues/55537
2105
    mds: crash during fs:upgrade test
2106
* https://tracker.ceph.com/issues/55538
2107 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2108
2109
h3. 2022 Apr 25
2110
2111
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2112
2113
* https://tracker.ceph.com/issues/52624
2114
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2115
* https://tracker.ceph.com/issues/50223
2116
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2117
* https://tracker.ceph.com/issues/55258
2118
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2119
* https://tracker.ceph.com/issues/55377
2120 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2121
2122
h3. 2022 Apr 14
2123
2124
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2125
2126
* https://tracker.ceph.com/issues/52624
2127
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2128
* https://tracker.ceph.com/issues/50223
2129
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2130
* https://tracker.ceph.com/issues/52438
2131
    qa: ffsb timeout
2132
* https://tracker.ceph.com/issues/55170
2133
    mds: crash during rejoin (CDir::fetch_keys)
2134
* https://tracker.ceph.com/issues/55331
2135
    pjd failure
2136
* https://tracker.ceph.com/issues/48773
2137
    qa: scrub does not complete
2138
* https://tracker.ceph.com/issues/55332
2139
    Failure in snaptest-git-ceph.sh
2140
* https://tracker.ceph.com/issues/55258
2141 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2142
2143 46 Venky Shankar
h3. 2022 Apr 11
2144 45 Venky Shankar
2145
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2146
2147
* https://tracker.ceph.com/issues/48773
2148
    qa: scrub does not complete
2149
* https://tracker.ceph.com/issues/52624
2150
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2151
* https://tracker.ceph.com/issues/52438
2152
    qa: ffsb timeout
2153
* https://tracker.ceph.com/issues/48680
2154
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2155
* https://tracker.ceph.com/issues/55236
2156
    qa: fs/snaps tests fails with "hit max job timeout"
2157
* https://tracker.ceph.com/issues/54108
2158
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2159
* https://tracker.ceph.com/issues/54971
2160
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2161
* https://tracker.ceph.com/issues/50223
2162
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2163
* https://tracker.ceph.com/issues/55258
2164 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2165 42 Venky Shankar
2166 43 Venky Shankar
h3. 2022 Mar 21
2167
2168
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2169
2170
Run didn't go well, lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
2171
2172
2173 42 Venky Shankar
h3. 2022 Mar 08
2174
2175
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2176
2177
rerun with
2178
- (drop) https://github.com/ceph/ceph/pull/44679
2179
- (drop) https://github.com/ceph/ceph/pull/44958
2180
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2181
2182
* https://tracker.ceph.com/issues/54419 (new)
2183
    `ceph orch upgrade start` seems to never reach completion
2184
* https://tracker.ceph.com/issues/51964
2185
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2186
* https://tracker.ceph.com/issues/52624
2187
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2188
* https://tracker.ceph.com/issues/50223
2189
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2190
* https://tracker.ceph.com/issues/52438
2191
    qa: ffsb timeout
2192
* https://tracker.ceph.com/issues/50821
2193
    qa: untar_snap_rm failure during mds thrashing
2194 41 Venky Shankar
2195
2196
h3. 2022 Feb 09
2197
2198
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2199
2200
rerun with
2201
- (drop) https://github.com/ceph/ceph/pull/37938
2202
- (drop) https://github.com/ceph/ceph/pull/44335
2203
- (drop) https://github.com/ceph/ceph/pull/44491
2204
- (drop) https://github.com/ceph/ceph/pull/44501
2205
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2206
2207
* https://tracker.ceph.com/issues/51964
2208
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2209
* https://tracker.ceph.com/issues/54066
2210
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2211
* https://tracker.ceph.com/issues/48773
2212
    qa: scrub does not complete
2213
* https://tracker.ceph.com/issues/52624
2214
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2215
* https://tracker.ceph.com/issues/50223
2216
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2217
* https://tracker.ceph.com/issues/52438
2218 40 Patrick Donnelly
    qa: ffsb timeout
2219
2220
h3. 2022 Feb 01
2221
2222
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2223
2224
* https://tracker.ceph.com/issues/54107
2225
    kclient: hang during umount
2226
* https://tracker.ceph.com/issues/54106
2227
    kclient: hang during workunit cleanup
2228
* https://tracker.ceph.com/issues/54108
2229
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2230
* https://tracker.ceph.com/issues/48773
2231
    qa: scrub does not complete
2232
* https://tracker.ceph.com/issues/52438
2233
    qa: ffsb timeout
2234 36 Venky Shankar
2235
2236
h3. 2022 Jan 13
2237 39 Venky Shankar
2238 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2239 38 Venky Shankar
2240
rerun with:
2241 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2242
- (drop) https://github.com/ceph/ceph/pull/43184
2243
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2244
2245
* https://tracker.ceph.com/issues/50223
2246
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2247
* https://tracker.ceph.com/issues/51282
2248
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2249
* https://tracker.ceph.com/issues/48773
2250
    qa: scrub does not complete
2251
* https://tracker.ceph.com/issues/52624
2252
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2253
* https://tracker.ceph.com/issues/53859
2254 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2255
2256
h3. 2022 Jan 03
2257
2258
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2259
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2260
2261
* https://tracker.ceph.com/issues/50223
2262
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2263
* https://tracker.ceph.com/issues/51964
2264
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2265
* https://tracker.ceph.com/issues/51267
2266
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2267
* https://tracker.ceph.com/issues/51282
2268
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2269
* https://tracker.ceph.com/issues/50821
2270
    qa: untar_snap_rm failure during mds thrashing
2271 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2272
    mds: "FAILED ceph_assert(!segments.empty())"
2273
* https://tracker.ceph.com/issues/52279
2274 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2275 33 Patrick Donnelly
2276
2277
h3. 2021 Dec 22
2278
2279
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2280
2281
* https://tracker.ceph.com/issues/52624
2282
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2283
* https://tracker.ceph.com/issues/50223
2284
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2285
* https://tracker.ceph.com/issues/52279
2286
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2287
* https://tracker.ceph.com/issues/50224
2288
    qa: test_mirroring_init_failure_with_recovery failure
2289
* https://tracker.ceph.com/issues/48773
2290
    qa: scrub does not complete
2291 32 Venky Shankar
2292
2293
h3. 2021 Nov 30
2294
2295
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2296
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2297
2298
* https://tracker.ceph.com/issues/53436
2299
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2300
* https://tracker.ceph.com/issues/51964
2301
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2302
* https://tracker.ceph.com/issues/48812
2303
    qa: test_scrub_pause_and_resume_with_abort failure
2304
* https://tracker.ceph.com/issues/51076
2305
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2306
* https://tracker.ceph.com/issues/50223
2307
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2308
* https://tracker.ceph.com/issues/52624
2309
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2310
* https://tracker.ceph.com/issues/50250
2311
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2312 31 Patrick Donnelly
2313
2314
h3. 2021 November 9
2315
2316
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2317
2318
* https://tracker.ceph.com/issues/53214
2319
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2320
* https://tracker.ceph.com/issues/48773
2321
    qa: scrub does not complete
2322
* https://tracker.ceph.com/issues/50223
2323
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2324
* https://tracker.ceph.com/issues/51282
2325
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2326
* https://tracker.ceph.com/issues/52624
2327
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2328
* https://tracker.ceph.com/issues/53216
2329
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2330
* https://tracker.ceph.com/issues/50250
2331
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2332
2333 30 Patrick Donnelly
2334
2335
h3. 2021 November 03
2336
2337
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2338
2339
* https://tracker.ceph.com/issues/51964
2340
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2341
* https://tracker.ceph.com/issues/51282
2342
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2343
* https://tracker.ceph.com/issues/52436
2344
    fs/ceph: "corrupt mdsmap"
2345
* https://tracker.ceph.com/issues/53074
2346
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2347
* https://tracker.ceph.com/issues/53150
2348
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2349
* https://tracker.ceph.com/issues/53155
2350
    MDSMonitor: assertion during upgrade to v16.2.5+
2351 29 Patrick Donnelly
2352
2353
h3. 2021 October 26
2354
2355
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2356
2357
* https://tracker.ceph.com/issues/53074
2358
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2359
* https://tracker.ceph.com/issues/52997
2360
    testing: hanging umount
2361
* https://tracker.ceph.com/issues/50824
2362
    qa: snaptest-git-ceph bus error
2363
* https://tracker.ceph.com/issues/52436
2364
    fs/ceph: "corrupt mdsmap"
2365
* https://tracker.ceph.com/issues/48773
2366
    qa: scrub does not complete
2367
* https://tracker.ceph.com/issues/53082
2368
    ceph-fuse: segmentation fault in Client::handle_mds_map
2369
* https://tracker.ceph.com/issues/50223
2370
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2371
* https://tracker.ceph.com/issues/52624
2372
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2373
* https://tracker.ceph.com/issues/50224
2374
    qa: test_mirroring_init_failure_with_recovery failure
2375
* https://tracker.ceph.com/issues/50821
2376
    qa: untar_snap_rm failure during mds thrashing
2377
* https://tracker.ceph.com/issues/50250
2378
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2379
2380 27 Patrick Donnelly
2381
2382 28 Patrick Donnelly
h3. 2021 October 19
2383 27 Patrick Donnelly
2384
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2385
2386
* https://tracker.ceph.com/issues/52995
2387
    qa: test_standby_count_wanted failure
2388
* https://tracker.ceph.com/issues/52948
2389
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2390
* https://tracker.ceph.com/issues/52996
2391
    qa: test_perf_counters via test_openfiletable
2392
* https://tracker.ceph.com/issues/48772
2393
    qa: pjd: not ok 9, 44, 80
2394
* https://tracker.ceph.com/issues/52997
2395
    testing: hanging umount
2396
* https://tracker.ceph.com/issues/50250
2397
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2398
* https://tracker.ceph.com/issues/52624
2399
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2400
* https://tracker.ceph.com/issues/50223
2401
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2402
* https://tracker.ceph.com/issues/50821
2403
    qa: untar_snap_rm failure during mds thrashing
2404
* https://tracker.ceph.com/issues/48773
2405
    qa: scrub does not complete
2406 26 Patrick Donnelly
2407
2408
h3. 2021 October 12
2409
2410
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2411
2412
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2413
2414
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2415
2416
2417
* https://tracker.ceph.com/issues/51282
2418
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2419
* https://tracker.ceph.com/issues/52948
2420
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2421
* https://tracker.ceph.com/issues/48773
2422
    qa: scrub does not complete
2423
* https://tracker.ceph.com/issues/50224
2424
    qa: test_mirroring_init_failure_with_recovery failure
2425
* https://tracker.ceph.com/issues/52949
2426
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2427 25 Patrick Donnelly
2428 23 Patrick Donnelly
2429 24 Patrick Donnelly
h3. 2021 October 02
2430
2431
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2432
2433
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2434
2435
test_simple failures caused by PR in this set.
2436
2437
A few reruns because of QA infra noise.
2438
2439
* https://tracker.ceph.com/issues/52822
2440
    qa: failed pacific install on fs:upgrade
2441
* https://tracker.ceph.com/issues/52624
2442
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2443
* https://tracker.ceph.com/issues/50223
2444
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2445
* https://tracker.ceph.com/issues/48773
2446
    qa: scrub does not complete
2447
2448
2449 23 Patrick Donnelly
h3. 2021 September 20
2450
2451
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2452
2453
* https://tracker.ceph.com/issues/52677
2454
    qa: test_simple failure
2455
* https://tracker.ceph.com/issues/51279
2456
    kclient hangs on umount (testing branch)
2457
* https://tracker.ceph.com/issues/50223
2458
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2459
* https://tracker.ceph.com/issues/50250
2460
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2461
* https://tracker.ceph.com/issues/52624
2462
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2463
* https://tracker.ceph.com/issues/52438
2464
    qa: ffsb timeout
2465 22 Patrick Donnelly
2466
2467
h3. 2021 September 10
2468
2469
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2470
2471
* https://tracker.ceph.com/issues/50223
2472
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2473
* https://tracker.ceph.com/issues/50250
2474
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2475
* https://tracker.ceph.com/issues/52624
2476
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2477
* https://tracker.ceph.com/issues/52625
2478
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2479
* https://tracker.ceph.com/issues/52439
2480
    qa: acls does not compile on centos stream
2481
* https://tracker.ceph.com/issues/50821
2482
    qa: untar_snap_rm failure during mds thrashing
2483
* https://tracker.ceph.com/issues/48773
2484
    qa: scrub does not complete
2485
* https://tracker.ceph.com/issues/52626
2486
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2487
* https://tracker.ceph.com/issues/51279
2488
    kclient hangs on umount (testing branch)
2489 21 Patrick Donnelly
2490
2491
h3. 2021 August 27
2492
2493
Several jobs died because of device failures.
2494
2495
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2496
2497
* https://tracker.ceph.com/issues/52430
2498
    mds: fast async create client mount breaks racy test
2499
* https://tracker.ceph.com/issues/52436
2500
    fs/ceph: "corrupt mdsmap"
2501
* https://tracker.ceph.com/issues/52437
2502
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2503
* https://tracker.ceph.com/issues/51282
2504
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2505
* https://tracker.ceph.com/issues/52438
2506
    qa: ffsb timeout
2507
* https://tracker.ceph.com/issues/52439
2508
    qa: acls does not compile on centos stream
2509 20 Patrick Donnelly
2510
2511
h3. 2021 July 30
2512
2513
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2514
2515
* https://tracker.ceph.com/issues/50250
2516
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2517
* https://tracker.ceph.com/issues/51282
2518
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2519
* https://tracker.ceph.com/issues/48773
2520
    qa: scrub does not complete
2521
* https://tracker.ceph.com/issues/51975
2522
    pybind/mgr/stats: KeyError
2523 19 Patrick Donnelly
2524
2525
h3. 2021 July 28
2526
2527
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2528
2529
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2530
2531
* https://tracker.ceph.com/issues/51905
2532
    qa: "error reading sessionmap 'mds1_sessionmap'"
2533
* https://tracker.ceph.com/issues/48773
2534
    qa: scrub does not complete
2535
* https://tracker.ceph.com/issues/50250
2536
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2537
* https://tracker.ceph.com/issues/51267
2538
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2539
* https://tracker.ceph.com/issues/51279
2540
    kclient hangs on umount (testing branch)
2541 18 Patrick Donnelly
2542
2543
h3. 2021 July 16
2544
2545
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2546
2547
* https://tracker.ceph.com/issues/48773
2548
    qa: scrub does not complete
2549
* https://tracker.ceph.com/issues/48772
2550
    qa: pjd: not ok 9, 44, 80
2551
* https://tracker.ceph.com/issues/45434
2552
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2553
* https://tracker.ceph.com/issues/51279
2554
    kclient hangs on umount (testing branch)
2555
* https://tracker.ceph.com/issues/50824
2556
    qa: snaptest-git-ceph bus error
2557 17 Patrick Donnelly
2558
2559
h3. 2021 July 04
2560
2561
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2562
2563
* https://tracker.ceph.com/issues/48773
2564
    qa: scrub does not complete
2565
* https://tracker.ceph.com/issues/39150
2566
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2567
* https://tracker.ceph.com/issues/45434
2568
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2569
* https://tracker.ceph.com/issues/51282
2570
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2571
* https://tracker.ceph.com/issues/48771
2572
    qa: iogen: workload fails to cause balancing
2573
* https://tracker.ceph.com/issues/51279
2574
    kclient hangs on umount (testing branch)
2575
* https://tracker.ceph.com/issues/50250
2576
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2577 16 Patrick Donnelly
2578
2579
h3. 2021 July 01
2580
2581
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2582
2583
* https://tracker.ceph.com/issues/51197
2584
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2585
* https://tracker.ceph.com/issues/50866
2586
    osd: stat mismatch on objects
2587
* https://tracker.ceph.com/issues/48773
2588
    qa: scrub does not complete
2589 15 Patrick Donnelly
2590
2591
h3. 2021 June 26
2592
2593
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2594
2595
* https://tracker.ceph.com/issues/51183
2596
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2597
* https://tracker.ceph.com/issues/51410
2598
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2599
* https://tracker.ceph.com/issues/48773
2600
    qa: scrub does not complete
2601
* https://tracker.ceph.com/issues/51282
2602
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2603
* https://tracker.ceph.com/issues/51169
2604
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2605
* https://tracker.ceph.com/issues/48772
2606
    qa: pjd: not ok 9, 44, 80
2607 14 Patrick Donnelly
2608
2609
h3. 2021 June 21
2610
2611
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2612
2613
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2614
2615
* https://tracker.ceph.com/issues/51282
2616
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2617
* https://tracker.ceph.com/issues/51183
2618
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2619
* https://tracker.ceph.com/issues/48773
2620
    qa: scrub does not complete
2621
* https://tracker.ceph.com/issues/48771
2622
    qa: iogen: workload fails to cause balancing
2623
* https://tracker.ceph.com/issues/51169
2624
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2625
* https://tracker.ceph.com/issues/50495
2626
    libcephfs: shutdown race fails with status 141
2627
* https://tracker.ceph.com/issues/45434
2628
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2629
* https://tracker.ceph.com/issues/50824
2630
    qa: snaptest-git-ceph bus error
2631
* https://tracker.ceph.com/issues/50223
2632
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2633 13 Patrick Donnelly
2634
2635
h3. 2021 June 16
2636
2637
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2638
2639
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2640
2641
* https://tracker.ceph.com/issues/45434
2642
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2643
* https://tracker.ceph.com/issues/51169
2644
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2645
* https://tracker.ceph.com/issues/43216
2646
    MDSMonitor: removes MDS coming out of quorum election
2647
* https://tracker.ceph.com/issues/51278
2648
    mds: "FAILED ceph_assert(!segments.empty())"
2649
* https://tracker.ceph.com/issues/51279
2650
    kclient hangs on umount (testing branch)
2651
* https://tracker.ceph.com/issues/51280
2652
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2653
* https://tracker.ceph.com/issues/51183
2654
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2655
* https://tracker.ceph.com/issues/51281
2656
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2657
* https://tracker.ceph.com/issues/48773
2658
    qa: scrub does not complete
2659
* https://tracker.ceph.com/issues/51076
2660
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2661
* https://tracker.ceph.com/issues/51228
2662
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2663
* https://tracker.ceph.com/issues/51282
2664
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2665 12 Patrick Donnelly
2666
2667
h3. 2021 June 14
2668
2669
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2670
2671
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2672
2673
* https://tracker.ceph.com/issues/51169
2674
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2675
* https://tracker.ceph.com/issues/51228
2676
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2677
* https://tracker.ceph.com/issues/48773
2678
    qa: scrub does not complete
2679
* https://tracker.ceph.com/issues/51183
2680
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2681
* https://tracker.ceph.com/issues/45434
2682
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2683
* https://tracker.ceph.com/issues/51182
2684
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2685
* https://tracker.ceph.com/issues/51229
2686
    qa: test_multi_snap_schedule list difference failure
2687
* https://tracker.ceph.com/issues/50821
2688
    qa: untar_snap_rm failure during mds thrashing
2689 11 Patrick Donnelly
2690
2691
h3. 2021 June 13
2692
2693
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2694
2695
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2696
2697
* https://tracker.ceph.com/issues/51169
2698
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2699
* https://tracker.ceph.com/issues/48773
2700
    qa: scrub does not complete
2701
* https://tracker.ceph.com/issues/51182
2702
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2703
* https://tracker.ceph.com/issues/51183
2704
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2705
* https://tracker.ceph.com/issues/51197
2706
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2707
* https://tracker.ceph.com/issues/45434
2708 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2709
2710
h3. 2021 June 11
2711
2712
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2713
2714
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2715
2716
* https://tracker.ceph.com/issues/51169
2717
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2718
* https://tracker.ceph.com/issues/45434
2719
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2720
* https://tracker.ceph.com/issues/48771
2721
    qa: iogen: workload fails to cause balancing
2722
* https://tracker.ceph.com/issues/43216
2723
    MDSMonitor: removes MDS coming out of quorum election
2724
* https://tracker.ceph.com/issues/51182
2725
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2726
* https://tracker.ceph.com/issues/50223
2727
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2728
* https://tracker.ceph.com/issues/48773
2729
    qa: scrub does not complete
2730
* https://tracker.ceph.com/issues/51183
2731
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2732
* https://tracker.ceph.com/issues/51184
2733
    qa: fs:bugs does not specify distro
2734 9 Patrick Donnelly
2735
2736
h3. 2021 June 03
2737
2738
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2739
2740
* https://tracker.ceph.com/issues/45434
2741
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2742
* https://tracker.ceph.com/issues/50016
2743
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2744
* https://tracker.ceph.com/issues/50821
2745
    qa: untar_snap_rm failure during mds thrashing
2746
* https://tracker.ceph.com/issues/50622 (regression)
2747
    msg: active_connections regression
2748
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2749
    qa: failed umount in test_volumes
2750
* https://tracker.ceph.com/issues/48773
2751
    qa: scrub does not complete
2752
* https://tracker.ceph.com/issues/43216
2753
    MDSMonitor: removes MDS coming out of quorum election
2754 7 Patrick Donnelly
2755
2756 8 Patrick Donnelly
h3. 2021 May 18
2757
2758
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2759
2760
A regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2761
looked better. Some odd new noise in the rerun relating to packaging and "No
2762
module named 'tasks.ceph'".
2763
2764
* https://tracker.ceph.com/issues/50824
2765
    qa: snaptest-git-ceph bus error
2766
* https://tracker.ceph.com/issues/50622 (regression)
2767
    msg: active_connections regression
2768
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2769
    qa: failed umount in test_volumes
2770
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2771
    qa: quota failure
2772
2773
2774 7 Patrick Donnelly
h3. 2021 May 18
2775
2776
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2777
2778
* https://tracker.ceph.com/issues/50821
2779
    qa: untar_snap_rm failure during mds thrashing
2780
* https://tracker.ceph.com/issues/48773
2781
    qa: scrub does not complete
2782
* https://tracker.ceph.com/issues/45591
2783
    mgr: FAILED ceph_assert(daemon != nullptr)
2784
* https://tracker.ceph.com/issues/50866
2785
    osd: stat mismatch on objects
2786
* https://tracker.ceph.com/issues/50016
2787
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2788
* https://tracker.ceph.com/issues/50867
2789
    qa: fs:mirror: reduced data availability
2792
* https://tracker.ceph.com/issues/50622 (regression)
2793
    msg: active_connections regression
2794
* https://tracker.ceph.com/issues/50223
2795
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2796
* https://tracker.ceph.com/issues/50868
2797
    qa: "kern.log.gz already exists; not overwritten"
2798
* https://tracker.ceph.com/issues/50870
2799
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2800 6 Patrick Donnelly
2801
2802
h3. 2021 May 11
2803
2804
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2805
2806
* one class of failures caused by PR
2807
* https://tracker.ceph.com/issues/48812
2808
    qa: test_scrub_pause_and_resume_with_abort failure
2809
* https://tracker.ceph.com/issues/50390
2810
    mds: monclient: wait_auth_rotating timed out after 30
2811
* https://tracker.ceph.com/issues/48773
2812
    qa: scrub does not complete
2813
* https://tracker.ceph.com/issues/50821
2814
    qa: untar_snap_rm failure during mds thrashing
2815
* https://tracker.ceph.com/issues/50224
2816
    qa: test_mirroring_init_failure_with_recovery failure
2817
* https://tracker.ceph.com/issues/50622 (regression)
2818
    msg: active_connections regression
2819
* https://tracker.ceph.com/issues/50825
2820
    qa: snaptest-git-ceph hang during mon thrashing v2
2823
* https://tracker.ceph.com/issues/50823
2824
    qa: RuntimeError: timeout waiting for cluster to stabilize
2825 5 Patrick Donnelly
2826
2827
h3. 2021 May 14
2828
2829
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2830
2831
* https://tracker.ceph.com/issues/48812
2832
    qa: test_scrub_pause_and_resume_with_abort failure
2833
* https://tracker.ceph.com/issues/50821
2834
    qa: untar_snap_rm failure during mds thrashing
2835
* https://tracker.ceph.com/issues/50622 (regression)
2836
    msg: active_connections regression
2837
* https://tracker.ceph.com/issues/50822
2838
    qa: testing kernel patch for client metrics causes mds abort
2839
* https://tracker.ceph.com/issues/48773
2840
    qa: scrub does not complete
2841
* https://tracker.ceph.com/issues/50823
2842
    qa: RuntimeError: timeout waiting for cluster to stabilize
2843
* https://tracker.ceph.com/issues/50824
2844
    qa: snaptest-git-ceph bus error
2845
* https://tracker.ceph.com/issues/50825
2846
    qa: snaptest-git-ceph hang during mon thrashing v2
2847
* https://tracker.ceph.com/issues/50826
2848
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2849 4 Patrick Donnelly
2850
2851
h3. 2021 May 01
2852
2853
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2854
2855
* https://tracker.ceph.com/issues/45434
2856
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2857
* https://tracker.ceph.com/issues/50281
2858
    qa: untar_snap_rm timeout
2859
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2860
    qa: quota failure
2861
* https://tracker.ceph.com/issues/48773
2862
    qa: scrub does not complete
2863
* https://tracker.ceph.com/issues/50390
2864
    mds: monclient: wait_auth_rotating timed out after 30
2865
* https://tracker.ceph.com/issues/50250
2866
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2867
* https://tracker.ceph.com/issues/50622 (regression)
2868
    msg: active_connections regression
2869
* https://tracker.ceph.com/issues/45591
2870
    mgr: FAILED ceph_assert(daemon != nullptr)
2871
* https://tracker.ceph.com/issues/50221
2872
    qa: snaptest-git-ceph failure in git diff
2873
* https://tracker.ceph.com/issues/50016
2874
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2875 3 Patrick Donnelly
2876
2877
h3. 2021 Apr 15
2878
2879
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2880
2881
* https://tracker.ceph.com/issues/50281
2882
    qa: untar_snap_rm timeout
2883
* https://tracker.ceph.com/issues/50220
2884
    qa: dbench workload timeout
2885
* https://tracker.ceph.com/issues/50246
2886
    mds: failure replaying journal (EMetaBlob)
2887
* https://tracker.ceph.com/issues/50250
2888
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2889
* https://tracker.ceph.com/issues/50016
2890
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2891
* https://tracker.ceph.com/issues/50222
2892
    osd: 5.2s0 deep-scrub : stat mismatch
2893
* https://tracker.ceph.com/issues/45434
2894
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2895
* https://tracker.ceph.com/issues/49845
2896
    qa: failed umount in test_volumes
2897
* https://tracker.ceph.com/issues/37808
2898
    osd: osdmap cache weak_refs assert during shutdown
2899
* https://tracker.ceph.com/issues/50387
2900
    client: fs/snaps failure
2901
* https://tracker.ceph.com/issues/50389
2902
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
2903
* https://tracker.ceph.com/issues/50216
2904
    qa: "ls: cannot access 'lost+found': No such file or directory"
2905
* https://tracker.ceph.com/issues/50390
2906
    mds: monclient: wait_auth_rotating timed out after 30
2907
2908 1 Patrick Donnelly
2909
2910 2 Patrick Donnelly
h3. 2021 Apr 08
2911
2912
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2913
2914
* https://tracker.ceph.com/issues/45434
2915
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2916
* https://tracker.ceph.com/issues/50016
2917
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2918
* https://tracker.ceph.com/issues/48773
2919
    qa: scrub does not complete
2920
* https://tracker.ceph.com/issues/50279
2921
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2922
* https://tracker.ceph.com/issues/50246
2923
    mds: failure replaying journal (EMetaBlob)
2924
* https://tracker.ceph.com/issues/48365
2925
    qa: ffsb build failure on CentOS 8.2
2926
* https://tracker.ceph.com/issues/50216
2927
    qa: "ls: cannot access 'lost+found': No such file or directory"
2928
* https://tracker.ceph.com/issues/50223
2929
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2930
* https://tracker.ceph.com/issues/50280
2931
    cephadm: RuntimeError: uid/gid not found
2932
* https://tracker.ceph.com/issues/50281
2933
    qa: untar_snap_rm timeout
2934
2935 1 Patrick Donnelly
h3. 2021 Apr 08
2936
2937
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2938
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2939
2940
* https://tracker.ceph.com/issues/50246
2941
    mds: failure replaying journal (EMetaBlob)
2942
* https://tracker.ceph.com/issues/50250
2943
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2944
2945
2946
h3. 2021 Apr 07
2947
2948
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
2949
2950
* https://tracker.ceph.com/issues/50215
2951
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
2952
* https://tracker.ceph.com/issues/49466
2953
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2954
* https://tracker.ceph.com/issues/50216
2955
    qa: "ls: cannot access 'lost+found': No such file or directory"
2956
* https://tracker.ceph.com/issues/48773
2957
    qa: scrub does not complete
2958
* https://tracker.ceph.com/issues/49845
2959
    qa: failed umount in test_volumes
2960
* https://tracker.ceph.com/issues/50220
2961
    qa: dbench workload timeout
2962
* https://tracker.ceph.com/issues/50221
2963
    qa: snaptest-git-ceph failure in git diff
2964
* https://tracker.ceph.com/issues/50222
2965
    osd: 5.2s0 deep-scrub : stat mismatch
2966
* https://tracker.ceph.com/issues/50223
2967
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2968
* https://tracker.ceph.com/issues/50224
2969
    qa: test_mirroring_init_failure_with_recovery failure
2970
2971
h3. 2021 Apr 01
2972
2973
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
2974
2975
* https://tracker.ceph.com/issues/48772
2976
    qa: pjd: not ok 9, 44, 80
2977
* https://tracker.ceph.com/issues/50177
2978
    osd: "stalled aio... buggy kernel or bad device?"
2979
* https://tracker.ceph.com/issues/48771
2980
    qa: iogen: workload fails to cause balancing
2981
* https://tracker.ceph.com/issues/49845
2982
    qa: failed umount in test_volumes
2983
* https://tracker.ceph.com/issues/48773
2984
    qa: scrub does not complete
2985
* https://tracker.ceph.com/issues/48805
2986
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
2987
* https://tracker.ceph.com/issues/50178
2988
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
2989
* https://tracker.ceph.com/issues/45434
2990
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2991
2992
h3. 2021 Mar 24
2993
2994
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
2995
2996
* https://tracker.ceph.com/issues/49500
2997
    qa: "Assertion `cb_done' failed."
2998
* https://tracker.ceph.com/issues/50019
2999
    qa: mount failure with cephadm "probably no MDS server is up?"
3000
* https://tracker.ceph.com/issues/50020
3001
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3002
* https://tracker.ceph.com/issues/48773
3003
    qa: scrub does not complete
3004
* https://tracker.ceph.com/issues/45434
3005
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3006
* https://tracker.ceph.com/issues/48805
3007
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3008
* https://tracker.ceph.com/issues/48772
3009
    qa: pjd: not ok 9, 44, 80
3010
* https://tracker.ceph.com/issues/50021
3011
    qa: snaptest-git-ceph failure during mon thrashing
3012
* https://tracker.ceph.com/issues/48771
3013
    qa: iogen: workload fails to cause balancing
3014
* https://tracker.ceph.com/issues/50016
3015
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3016
* https://tracker.ceph.com/issues/49466
3017
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3018
3019
3020
h3. 2021 Mar 18
3021
3022
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3023
3024
* https://tracker.ceph.com/issues/49466
3025
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3026
* https://tracker.ceph.com/issues/48773
3027
    qa: scrub does not complete
3028
* https://tracker.ceph.com/issues/48805
3029
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3030
* https://tracker.ceph.com/issues/45434
3031
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3032
* https://tracker.ceph.com/issues/49845
3033
    qa: failed umount in test_volumes
3034
* https://tracker.ceph.com/issues/49605
3035
    mgr: drops command on the floor
3036
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3037
    qa: quota failure
3038
* https://tracker.ceph.com/issues/49928
3039
    client: items pinned in cache preventing unmount x2
3040
3041
h3. 2021 Mar 15
3042
3043
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3044
3045
* https://tracker.ceph.com/issues/49842
3046
    qa: stuck pkg install
3047
* https://tracker.ceph.com/issues/49466
3048
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3049
* https://tracker.ceph.com/issues/49822
3050
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3051
* https://tracker.ceph.com/issues/49240
3052
    terminate called after throwing an instance of 'std::bad_alloc'
3053
* https://tracker.ceph.com/issues/48773
3054
    qa: scrub does not complete
3055
* https://tracker.ceph.com/issues/45434
3056
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3057
* https://tracker.ceph.com/issues/49500
3058
    qa: "Assertion `cb_done' failed."
3059
* https://tracker.ceph.com/issues/49843
3060
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3061
* https://tracker.ceph.com/issues/49845
3062
    qa: failed umount in test_volumes
3063
* https://tracker.ceph.com/issues/48805
3064
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3065
* https://tracker.ceph.com/issues/49605
3066
    mgr: drops command on the floor
3067
3068
and a failure caused by PR: https://github.com/ceph/ceph/pull/39969
3069
3070
3071
h3. 2021 Mar 09
3072
3073
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3074
3075
* https://tracker.ceph.com/issues/49500
3076
    qa: "Assertion `cb_done' failed."
3077
* https://tracker.ceph.com/issues/48805
3078
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3079
* https://tracker.ceph.com/issues/48773
3080
    qa: scrub does not complete
3081
* https://tracker.ceph.com/issues/45434
3082
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3083
* https://tracker.ceph.com/issues/49240
3084
    terminate called after throwing an instance of 'std::bad_alloc'
3085
* https://tracker.ceph.com/issues/49466
3086
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3087
* https://tracker.ceph.com/issues/49684
3088
    qa: fs:cephadm mount does not wait for mds to be created
3089
* https://tracker.ceph.com/issues/48771
3090
    qa: iogen: workload fails to cause balancing