
h1. <code>main</code> branch

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
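For context on how such a check typically works: the cluster log is scanned after the run for WRN/ERR entries, and the job fails unless every hit matches an ignorelist pattern. The sketch below is only an illustrative approximation (the file name, regexes, and ignorelist entries are hypothetical), not the actual teuthology implementation:

<pre><code class="python">
import re

# Hypothetical ignorelist, similar in spirit to a job's log-ignorelist entries.
IGNORELIST = [
    r"POOL_APP_NOT_ENABLED",
    r"evicting unresponsive client",
]

def unignored_warnings(path="ceph.log"):
    """Collect [WRN]/[ERR] lines that no ignorelist pattern covers."""
    hits = []
    with open(path) as log:
        for line in log:
            if not re.search(r"\[(WRN|ERR)\]", line):
                continue
            if any(re.search(pat, line) for pat in IGNORELIST):
                continue
            hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    bad = unignored_warnings()
    print(f"{len(bad)} unignored WRN/ERR lines found")
</code></pre>
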
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to:
  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denial-related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
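For illustration, the failing expectation is roughly the one sketched below: after <code>fusermount -u</code> returns, the mountpoint should disappear from <code>/proc/mounts</code> within the timeout. This is only a hypothetical check written for this note (the helper names, timeout, and mountpoint are made up); it is not the qa/teuthology code that reported the failure:

<pre><code class="python">
import subprocess
import time

def is_mounted(mountpoint):
    """Check /proc/mounts for the given mountpoint."""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if len(fields) > 1 and fields[1] == mountpoint:
                return True
    return False

def unmount_and_wait(mountpoint, timeout=300):
    """Hypothetical helper: run fusermount -u, then poll until the
    mountpoint disappears or the timeout expires."""
    subprocess.run(["fusermount", "-u", mountpoint], check=True)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not is_mounted(mountpoint):
            return True   # unmounted as expected
        time.sleep(5)
    return False          # still mounted; roughly the symptom behind i64502

if __name__ == "__main__":
    if not unmount_and_wait("/mnt/cephfs"):
        raise SystemExit("client did not unmount within the timeout")
</code></pre>
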
h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* from the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(never mind the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King they are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/

There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

The blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023
1048
1049
* https://tracker.ceph.com/issues/52624
1050
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1051
* https://tracker.ceph.com/issues/57676
1052
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1053
* https://tracker.ceph.com/issues/54460
1054
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1055
* https://tracker.ceph.com/issues/57655
1056
    qa: fs:mixed-clients kernel_untar_build failure
1057
* https://tracker.ceph.com/issues/51964
1058
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1059
* https://tracker.ceph.com/issues/59344
1060
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1061
* https://tracker.ceph.com/issues/61182
1062
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1063
* https://tracker.ceph.com/issues/61957
1064
    test_client_limits.TestClientLimits.test_client_release_bug
1065
* https://tracker.ceph.com/issues/59348
1066
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1067
* https://tracker.ceph.com/issues/61892
1068
    test_strays.TestStrays.test_snapshot_remove failed
1069
* https://tracker.ceph.com/issues/59346
1070
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1071
* https://tracker.ceph.com/issues/44565
1072
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1073
* https://tracker.ceph.com/issues/62067
1074
    ffsb.sh failure "Resource temporarily unavailable"
1075 156 Venky Shankar
1076
1077
h3. 17 July 2023
1078
1079
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1080
1081
* https://tracker.ceph.com/issues/61982
1082
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1083
* https://tracker.ceph.com/issues/59344
1084
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1085
* https://tracker.ceph.com/issues/61182
1086
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1087
* https://tracker.ceph.com/issues/61957
1088
    test_client_limits.TestClientLimits.test_client_release_bug
1089
* https://tracker.ceph.com/issues/61400
1090
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1091
* https://tracker.ceph.com/issues/59348
1092
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1093
* https://tracker.ceph.com/issues/61892
1094
    test_strays.TestStrays.test_snapshot_remove failed
1095
* https://tracker.ceph.com/issues/59346
1096
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1097
* https://tracker.ceph.com/issues/62036
1098
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1099
* https://tracker.ceph.com/issues/61737
1100
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1101
* https://tracker.ceph.com/issues/44565
1102
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1103 155 Rishabh Dave
1104 1 Patrick Donnelly
1105 153 Rishabh Dave
h3. 13 July 2023 Run 2
1106 152 Rishabh Dave
1107
1108
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1109
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1110
1111
* https://tracker.ceph.com/issues/61957
1112
  test_client_limits.TestClientLimits.test_client_release_bug
1113
* https://tracker.ceph.com/issues/61982
1114
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1115
* https://tracker.ceph.com/issues/59348
1116
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1117
* https://tracker.ceph.com/issues/59344
1118
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1119
* https://tracker.ceph.com/issues/54460
1120
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1121
* https://tracker.ceph.com/issues/57655
1122
  qa: fs:mixed-clients kernel_untar_build failure
1123
* https://tracker.ceph.com/issues/61400
1124
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1125
* https://tracker.ceph.com/issues/61399
1126
  ior build failure
1127
1128 151 Venky Shankar
h3. 13 July 2023
1129
1130
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1131
1132
* https://tracker.ceph.com/issues/54460
1133
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1134
* https://tracker.ceph.com/issues/61400
1135
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1136
* https://tracker.ceph.com/issues/57655
1137
    qa: fs:mixed-clients kernel_untar_build failure
1138
* https://tracker.ceph.com/issues/61945
1139
    LibCephFS.DelegTimeout failure
1140
* https://tracker.ceph.com/issues/52624
1141
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1142
* https://tracker.ceph.com/issues/57676
1143
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1144
* https://tracker.ceph.com/issues/59348
1145
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1146
* https://tracker.ceph.com/issues/59344
1147
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1148
* https://tracker.ceph.com/issues/51964
1149
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1150
* https://tracker.ceph.com/issues/59346
1151
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1152
* https://tracker.ceph.com/issues/61982
1153
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1154 150 Rishabh Dave
1155
1156
h3. 13 Jul 2023
1157
1158
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1159
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1160
1161
* https://tracker.ceph.com/issues/61957
1162
  test_client_limits.TestClientLimits.test_client_release_bug
1163
* https://tracker.ceph.com/issues/59348
1164
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1165
* https://tracker.ceph.com/issues/59346
1166
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1167
* https://tracker.ceph.com/issues/48773
1168
  scrub does not complete: reached max tries
1169
* https://tracker.ceph.com/issues/59344
1170
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1171
* https://tracker.ceph.com/issues/52438
1172
  qa: ffsb timeout
1173
* https://tracker.ceph.com/issues/57656
1174
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1175
* https://tracker.ceph.com/issues/58742
1176
  xfstests-dev: kcephfs: generic
1177
* https://tracker.ceph.com/issues/61399
1178 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1179 149 Rishabh Dave
1180 148 Rishabh Dave
h3. 12 July 2023
1181
1182
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1183
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1184
1185
* https://tracker.ceph.com/issues/61892
1186
  test_strays.TestStrays.test_snapshot_remove failed
1187
* https://tracker.ceph.com/issues/59348
1188
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1189
* https://tracker.ceph.com/issues/53859
1190
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1191
* https://tracker.ceph.com/issues/59346
1192
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1193
* https://tracker.ceph.com/issues/58742
1194
  xfstests-dev: kcephfs: generic
1195
* https://tracker.ceph.com/issues/59344
1196
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1197
* https://tracker.ceph.com/issues/52438
1198
  qa: ffsb timeout
1199
* https://tracker.ceph.com/issues/57656
1200
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1201
* https://tracker.ceph.com/issues/54460
1202
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1203
* https://tracker.ceph.com/issues/57655
1204
  qa: fs:mixed-clients kernel_untar_build failure
1205
* https://tracker.ceph.com/issues/61182
1206
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1207
* https://tracker.ceph.com/issues/61400
1208
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1209 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1210 146 Patrick Donnelly
  reached max tries: scrub does not complete
1211
1212
h3. 05 July 2023
1213
1214
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1215
1216 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1217 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1218
1219
h3. 27 Jun 2023
1220
1221
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1222 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1223
1224
* https://tracker.ceph.com/issues/59348
1225
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1226
* https://tracker.ceph.com/issues/54460
1227
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1228
* https://tracker.ceph.com/issues/59346
1229
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1230
* https://tracker.ceph.com/issues/59344
1231
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1232
* https://tracker.ceph.com/issues/61399
1233
  libmpich: undefined references to fi_strerror
1234
* https://tracker.ceph.com/issues/50223
1235
  client.xxxx isn't responding to mclientcaps(revoke)
1236 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1237
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1238 142 Venky Shankar
1239
1240
h3. 22 June 2023
1241
1242
* https://tracker.ceph.com/issues/57676
1243
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1244
* https://tracker.ceph.com/issues/54460
1245
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1246
* https://tracker.ceph.com/issues/59344
1247
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1248
* https://tracker.ceph.com/issues/59348
1249
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1250
* https://tracker.ceph.com/issues/61400
1251
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1252
* https://tracker.ceph.com/issues/57655
1253
    qa: fs:mixed-clients kernel_untar_build failure
1254
* https://tracker.ceph.com/issues/61394
1255
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1256
* https://tracker.ceph.com/issues/61762
1257
    qa: wait_for_clean: failed before timeout expired
1258
* https://tracker.ceph.com/issues/61775
1259
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1260
* https://tracker.ceph.com/issues/44565
1261
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1262
* https://tracker.ceph.com/issues/61790
1263
    cephfs client to mds comms remain silent after reconnect
1264
* https://tracker.ceph.com/issues/61791
1265
    snaptest-git-ceph.sh test timed out (job dead)
1266 139 Venky Shankar
1267
1268
h3. 20 June 2023
1269
1270
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1271
1272
* https://tracker.ceph.com/issues/57676
1273
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1274
* https://tracker.ceph.com/issues/54460
1275
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1276 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1277 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1278 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1279 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1280
* https://tracker.ceph.com/issues/59344
1281
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1282
* https://tracker.ceph.com/issues/59348
1283
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1284
* https://tracker.ceph.com/issues/57656
1285
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1286
* https://tracker.ceph.com/issues/61400
1287
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1288
* https://tracker.ceph.com/issues/57655
1289
    qa: fs:mixed-clients kernel_untar_build failure
1290
* https://tracker.ceph.com/issues/44565
1291
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1292
* https://tracker.ceph.com/issues/61737
1293 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1294
1295
h3. 16 June 2023
1296
1297 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1298 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1299 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1300 1 Patrick Donnelly
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1301
1302
1303
* https://tracker.ceph.com/issues/59344
1304
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1305 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1306
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1307 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1308
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1309
* https://tracker.ceph.com/issues/57656
1310
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1311
* https://tracker.ceph.com/issues/54460
1312
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1313 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1314
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1315 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1316
  libmpich: undefined references to fi_strerror
1317
* https://tracker.ceph.com/issues/58945
1318
  xfstests-dev: ceph-fuse: generic 
1319 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1320 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1321
1322
h3. 24 May 2023
1323
1324
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1325
1326
* https://tracker.ceph.com/issues/57676
1327
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1328
* https://tracker.ceph.com/issues/59683
1329
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1330
* https://tracker.ceph.com/issues/61399
1331
    qa: "[Makefile:299: ior] Error 1"
1332
* https://tracker.ceph.com/issues/61265
1333
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1334
* https://tracker.ceph.com/issues/59348
1335
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1336
* https://tracker.ceph.com/issues/59346
1337
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1338
* https://tracker.ceph.com/issues/61400
1339
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1340
* https://tracker.ceph.com/issues/54460
1341
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1342
* https://tracker.ceph.com/issues/51964
1343
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1344
* https://tracker.ceph.com/issues/59344
1345
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1346
* https://tracker.ceph.com/issues/61407
1347
    mds: abort on CInode::verify_dirfrags
1348
* https://tracker.ceph.com/issues/48773
1349
    qa: scrub does not complete
1350
* https://tracker.ceph.com/issues/57655
1351
    qa: fs:mixed-clients kernel_untar_build failure
1352
* https://tracker.ceph.com/issues/61409
1353 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1354
1355
h3. 15 May 2023
1356 130 Venky Shankar
1357 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1358
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1359
1360
* https://tracker.ceph.com/issues/52624
1361
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1362
* https://tracker.ceph.com/issues/54460
1363
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1364
* https://tracker.ceph.com/issues/57676
1365
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1366
* https://tracker.ceph.com/issues/59684 [kclient bug]
1367
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1368
* https://tracker.ceph.com/issues/59348
1369
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1370 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1371
    dbench test results in call trace in dmesg [kclient bug]
1372 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1373 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1374 125 Venky Shankar
1375
 
1376 129 Rishabh Dave
h3. 11 May 2023
1377
1378
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1379
1380
* https://tracker.ceph.com/issues/59684 [kclient bug]
1381
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1382
* https://tracker.ceph.com/issues/59348
1383
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1384
* https://tracker.ceph.com/issues/57655
1385
  qa: fs:mixed-clients kernel_untar_build failure
1386
* https://tracker.ceph.com/issues/57676
1387
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1388
* https://tracker.ceph.com/issues/55805
1389
  error during scrub thrashing reached max tries in 900 secs
1390
* https://tracker.ceph.com/issues/54460
1391
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1392
* https://tracker.ceph.com/issues/57656
1393
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1394
* https://tracker.ceph.com/issues/58220
1395
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1396 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1397
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1398 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1399
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1400 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1401
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1402 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1403
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1404
1405 125 Venky Shankar
h3. 11 May 2023
1406 127 Venky Shankar
1407
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1408 126 Venky Shankar
1409 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1410
 was included in the branch; however, the PR was updated and needs a retest).
1411
1412
* https://tracker.ceph.com/issues/52624
1413
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1414
* https://tracker.ceph.com/issues/54460
1415
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1416
* https://tracker.ceph.com/issues/57676
1417
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1418
* https://tracker.ceph.com/issues/59683
1419
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1420
* https://tracker.ceph.com/issues/59684 [kclient bug]
1421
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1422
* https://tracker.ceph.com/issues/59348
1423 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1424
1425
h3. 09 May 2023
1426
1427
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1428
1429
* https://tracker.ceph.com/issues/52624
1430
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1431
* https://tracker.ceph.com/issues/58340
1432
    mds: fsstress.sh hangs with multimds
1433
* https://tracker.ceph.com/issues/54460
1434
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1435
* https://tracker.ceph.com/issues/57676
1436
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1437
* https://tracker.ceph.com/issues/51964
1438
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1439
* https://tracker.ceph.com/issues/59350
1440
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1441
* https://tracker.ceph.com/issues/59683
1442
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1443
* https://tracker.ceph.com/issues/59684 [kclient bug]
1444
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1445
* https://tracker.ceph.com/issues/59348
1446 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1447
1448
h3. 10 Apr 2023
1449
1450
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1451
1452
* https://tracker.ceph.com/issues/52624
1453
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1454
* https://tracker.ceph.com/issues/58340
1455
    mds: fsstress.sh hangs with multimds
1456
* https://tracker.ceph.com/issues/54460
1457
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1458
* https://tracker.ceph.com/issues/57676
1459
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1460 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1461 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1462 121 Rishabh Dave
1463 120 Rishabh Dave
h3. 31 Mar 2023
1464 122 Rishabh Dave
1465
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1466 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1467
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1468
1469
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1470
1471
* https://tracker.ceph.com/issues/57676
1472
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1473
* https://tracker.ceph.com/issues/54460
1474
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1475
* https://tracker.ceph.com/issues/58220
1476
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1477
* https://tracker.ceph.com/issues/58220#note-9
1478
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1479
* https://tracker.ceph.com/issues/56695
1480
  Command failed (workunit test suites/pjd.sh)
1481
* https://tracker.ceph.com/issues/58564 
1482
  workunit dbench failed with error code 1
1483
* https://tracker.ceph.com/issues/57206
1484
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1485
* https://tracker.ceph.com/issues/57580
1486
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1487
* https://tracker.ceph.com/issues/58940
1488
  ceph osd hit ceph_abort
1489
* https://tracker.ceph.com/issues/55805
1490 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1491
1492
h3. 30 March 2023
1493
1494
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1495
1496
* https://tracker.ceph.com/issues/58938
1497
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1498
* https://tracker.ceph.com/issues/51964
1499
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1500
* https://tracker.ceph.com/issues/58340
1501 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1502
1503 115 Venky Shankar
h3. 29 March 2023
1504 114 Venky Shankar
1505
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1506
1507
* https://tracker.ceph.com/issues/56695
1508
    [RHEL stock] pjd test failures
1509
* https://tracker.ceph.com/issues/57676
1510
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1511
* https://tracker.ceph.com/issues/57087
1512
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1513 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1514
    mds: fsstress.sh hangs with multimds
1515 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1516
    qa: fs:mixed-clients kernel_untar_build failure
1517 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1518
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1519 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1520 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1521
1522
h3. 13 Mar 2023
1523
1524
* https://tracker.ceph.com/issues/56695
1525
    [RHEL stock] pjd test failures
1526
* https://tracker.ceph.com/issues/57676
1527
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1528
* https://tracker.ceph.com/issues/51964
1529
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1530
* https://tracker.ceph.com/issues/54460
1531
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1532
* https://tracker.ceph.com/issues/57656
1533 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1534
1535
h3. 09 Mar 2023
1536
1537
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1538
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1539
1540
* https://tracker.ceph.com/issues/56695
1541
    [RHEL stock] pjd test failures
1542
* https://tracker.ceph.com/issues/57676
1543
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1544
* https://tracker.ceph.com/issues/51964
1545
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1546
* https://tracker.ceph.com/issues/54460
1547
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1548
* https://tracker.ceph.com/issues/58340
1549
    mds: fsstress.sh hangs with multimds
1550
* https://tracker.ceph.com/issues/57087
1551 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1552
1553
h3. 07 Mar 2023
1554
1555
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1556
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1557
1558
* https://tracker.ceph.com/issues/56695
1559
    [RHEL stock] pjd test failures
1560
* https://tracker.ceph.com/issues/57676
1561
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1562
* https://tracker.ceph.com/issues/51964
1563
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1564
* https://tracker.ceph.com/issues/57656
1565
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1566
* https://tracker.ceph.com/issues/57655
1567
    qa: fs:mixed-clients kernel_untar_build failure
1568
* https://tracker.ceph.com/issues/58220
1569
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1570
* https://tracker.ceph.com/issues/54460
1571
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1572
* https://tracker.ceph.com/issues/58934
1573 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1574
1575
h3. 28 Feb 2023
1576
1577
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1578
1579
* https://tracker.ceph.com/issues/56695
1580
    [RHEL stock] pjd test failures
1581
* https://tracker.ceph.com/issues/57676
1582
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1583 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1584 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1585
1586 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1587
1588
h3. 25 Jan 2023
1589
1590
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1591
1592
* https://tracker.ceph.com/issues/52624
1593
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1594
* https://tracker.ceph.com/issues/56695
1595
    [RHEL stock] pjd test failures
1596
* https://tracker.ceph.com/issues/57676
1597
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1598
* https://tracker.ceph.com/issues/56446
1599
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1600
* https://tracker.ceph.com/issues/57206
1601
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1602
* https://tracker.ceph.com/issues/58220
1603
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1604
* https://tracker.ceph.com/issues/58340
1605
  mds: fsstress.sh hangs with multimds
1606
* https://tracker.ceph.com/issues/56011
1607
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1608
* https://tracker.ceph.com/issues/54460
1609 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1610
1611
h3. 30 JAN 2023
1612
1613
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1614
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1615 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1616
1617 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1618
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1619
* https://tracker.ceph.com/issues/56695
1620
  [RHEL stock] pjd test failures
1621
* https://tracker.ceph.com/issues/57676
1622
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1623
* https://tracker.ceph.com/issues/55332
1624
  Failure in snaptest-git-ceph.sh
1625
* https://tracker.ceph.com/issues/51964
1626
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1627
* https://tracker.ceph.com/issues/56446
1628
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1629
* https://tracker.ceph.com/issues/57655 
1630
  qa: fs:mixed-clients kernel_untar_build failure
1631
* https://tracker.ceph.com/issues/54460
1632
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1633 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1634
  mds: fsstress.sh hangs with multimds
1635 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1636 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1637
1638
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1639 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1640
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1641 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1642 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
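The cluster-log warnings quoted throughout this page (for example the prometheus module message above) come from the teuthology cluster log. Below is a minimal, illustrative log-scan sketch; it is not the qa suite's actual checker, and the log file name and ignore list are assumptions.

<pre><code class="python">
# Minimal sketch: list health-check WRN/ERR lines from a cluster log,
# skipping entries already covered by a (hypothetical) ignore list.
import re

IGNORE = ("POOL_APP_NOT_ENABLED",)      # hypothetical ignore list
pattern = re.compile(r"cluster \[(WRN|ERR)\] (.+)")

with open("cluster.ceph.log") as f:     # assumed log file name
    for line in f:
        m = pattern.search(line)
        if m and not any(tag in m.group(2) for tag in IGNORE):
            print(m.group(1), m.group(2).strip())
</code></pre>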
1643
1644
h3. 15 Dec 2022
1645
1646
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1647
1648
* https://tracker.ceph.com/issues/52624
1649
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1650
* https://tracker.ceph.com/issues/56695
1651
    [RHEL stock] pjd test failures
1652
* https://tracker.ceph.com/issues/58219
1653
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1654
* https://tracker.ceph.com/issues/57655
1655
    qa: fs:mixed-clients kernel_untar_build failure
1656
* https://tracker.ceph.com/issues/57676
1657
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1658
* https://tracker.ceph.com/issues/58340
1659 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1660
1661
h3. 08 Dec 2022
1662 99 Venky Shankar
1663 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1664
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1665
1666
(lots of transient git.ceph.com failures)
1667
1668
* https://tracker.ceph.com/issues/52624
1669
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1670
* https://tracker.ceph.com/issues/56695
1671
    [RHEL stock] pjd test failures
1672
* https://tracker.ceph.com/issues/57655
1673
    qa: fs:mixed-clients kernel_untar_build failure
1674
* https://tracker.ceph.com/issues/58219
1675
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1676
* https://tracker.ceph.com/issues/58220
1677
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1678 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1679
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1680 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1681
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1682
* https://tracker.ceph.com/issues/54460
1683
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1684 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1685 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1686
1687
h3. 14 Oct 2022
1688
1689
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1690
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1691
1692
* https://tracker.ceph.com/issues/52624
1693
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1694
* https://tracker.ceph.com/issues/55804
1695
    Command failed (workunit test suites/pjd.sh)
1696
* https://tracker.ceph.com/issues/51964
1697
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1698
* https://tracker.ceph.com/issues/57682
1699
    client: ERROR: test_reconnect_after_blocklisted
1700 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1701 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1702
1703
h3. 10 Oct 2022
1704 92 Rishabh Dave
1705 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1706
1707
reruns
1708
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1709 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1710 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1711 93 Rishabh Dave
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1712 91 Rishabh Dave
1713
known bugs
1714
* https://tracker.ceph.com/issues/52624
1715
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1716
* https://tracker.ceph.com/issues/50223
1717
  client.xxxx isn't responding to mclientcaps(revoke)
1718
* https://tracker.ceph.com/issues/57299
1719
  qa: test_dump_loads fails with JSONDecodeError
1720
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1721
  qa: fs:mixed-clients kernel_untar_build failure
1722
* https://tracker.ceph.com/issues/57206
1723 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1724
1725
h3. 2022 Sep 29
1726
1727
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1728
1729
* https://tracker.ceph.com/issues/55804
1730
  Command failed (workunit test suites/pjd.sh)
1731
* https://tracker.ceph.com/issues/36593
1732
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1733
* https://tracker.ceph.com/issues/52624
1734
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1735
* https://tracker.ceph.com/issues/51964
1736
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1737
* https://tracker.ceph.com/issues/56632
1738
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1739
* https://tracker.ceph.com/issues/50821
1740 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1741
1742
h3. 2022 Sep 26
1743
1744
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1745
1746
* https://tracker.ceph.com/issues/55804
1747
    qa failure: pjd link tests failed
1748
* https://tracker.ceph.com/issues/57676
1749
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1750
* https://tracker.ceph.com/issues/52624
1751
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1752
* https://tracker.ceph.com/issues/57580
1753
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1754
* https://tracker.ceph.com/issues/48773
1755
    qa: scrub does not complete
1756
* https://tracker.ceph.com/issues/57299
1757
    qa: test_dump_loads fails with JSONDecodeError
1758
* https://tracker.ceph.com/issues/57280
1759
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1760
* https://tracker.ceph.com/issues/57205
1761
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1762
* https://tracker.ceph.com/issues/57656
1763
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1764
* https://tracker.ceph.com/issues/57677
1765
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1766
* https://tracker.ceph.com/issues/57206
1767
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1768
* https://tracker.ceph.com/issues/57446
1769
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1770 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1771
    qa: fs:mixed-clients kernel_untar_build failure
1772 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1773
    client: ERROR: test_reconnect_after_blocklisted
1774 87 Patrick Donnelly
1775
1776
h3. 2022 Sep 22
1777
1778
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1779
1780
* https://tracker.ceph.com/issues/57299
1781
    qa: test_dump_loads fails with JSONDecodeError
1782
* https://tracker.ceph.com/issues/57205
1783
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1784
* https://tracker.ceph.com/issues/52624
1785
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1786
* https://tracker.ceph.com/issues/57580
1787
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1788
* https://tracker.ceph.com/issues/57280
1789
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1790
* https://tracker.ceph.com/issues/48773
1791
    qa: scrub does not complete
1792
* https://tracker.ceph.com/issues/56446
1793
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1794
* https://tracker.ceph.com/issues/57206
1795
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1796
* https://tracker.ceph.com/issues/51267
1797
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1798
1799
NEW:
1800
1801
* https://tracker.ceph.com/issues/57656
1802
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1803
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1804
    qa: fs:mixed-clients kernel_untar_build failure
1805
* https://tracker.ceph.com/issues/57657
1806
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1807
1808
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1809 80 Venky Shankar
1810 79 Venky Shankar
1811
h3. 2022 Sep 16
1812
1813
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1814
1815
* https://tracker.ceph.com/issues/57446
1816
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1817
* https://tracker.ceph.com/issues/57299
1818
    qa: test_dump_loads fails with JSONDecodeError
1819
* https://tracker.ceph.com/issues/50223
1820
    client.xxxx isn't responding to mclientcaps(revoke)
1821
* https://tracker.ceph.com/issues/52624
1822
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1823
* https://tracker.ceph.com/issues/57205
1824
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1825
* https://tracker.ceph.com/issues/57280
1826
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1827
* https://tracker.ceph.com/issues/51282
1828
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1829
* https://tracker.ceph.com/issues/48203
1830
    qa: quota failure
1831
* https://tracker.ceph.com/issues/36593
1832
    qa: quota failure caused by clients stepping on each other
1833
* https://tracker.ceph.com/issues/57580
1834 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1835
1836 76 Rishabh Dave
1837
h3. 2022 Aug 26
1838
1839
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1840
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1841
1842
* https://tracker.ceph.com/issues/57206
1843
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1844
* https://tracker.ceph.com/issues/56632
1845
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1846
* https://tracker.ceph.com/issues/56446
1847
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1848
* https://tracker.ceph.com/issues/51964
1849
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1850
* https://tracker.ceph.com/issues/53859
1851
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1852
1853
* https://tracker.ceph.com/issues/54460
1854
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1855
* https://tracker.ceph.com/issues/54462
1856
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1857
1859
* https://tracker.ceph.com/issues/36593
1860
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1861
1862
* https://tracker.ceph.com/issues/52624
1863
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1864
* https://tracker.ceph.com/issues/55804
1865
  Command failed (workunit test suites/pjd.sh)
1866
* https://tracker.ceph.com/issues/50223
1867
  client.xxxx isn't responding to mclientcaps(revoke)
1868 75 Venky Shankar
1869
1870
h3. 2022 Aug 22
1871
1872
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1873
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1874
1875
* https://tracker.ceph.com/issues/52624
1876
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1877
* https://tracker.ceph.com/issues/56446
1878
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1879
* https://tracker.ceph.com/issues/55804
1880
    Command failed (workunit test suites/pjd.sh)
1881
* https://tracker.ceph.com/issues/51278
1882
    mds: "FAILED ceph_assert(!segments.empty())"
1883
* https://tracker.ceph.com/issues/54460
1884
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1885
* https://tracker.ceph.com/issues/57205
1886
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1887
* https://tracker.ceph.com/issues/57206
1888
    ceph_test_libcephfs_reclaim crashes during test
1889
* https://tracker.ceph.com/issues/53859
1890
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1891
* https://tracker.ceph.com/issues/50223
1892 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1893
1894
h3. 2022 Aug 12
1895
1896
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1897
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1898
1899
* https://tracker.ceph.com/issues/52624
1900
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1901
* https://tracker.ceph.com/issues/56446
1902
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1903
* https://tracker.ceph.com/issues/51964
1904
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1905
* https://tracker.ceph.com/issues/55804
1906
    Command failed (workunit test suites/pjd.sh)
1907
* https://tracker.ceph.com/issues/50223
1908
    client.xxxx isn't responding to mclientcaps(revoke)
1909
* https://tracker.ceph.com/issues/50821
1910 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1911 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1912 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1913
1914
h3. 2022 Aug 04
1915
1916
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1917
1918 69 Rishabh Dave
Unrelated teuthology failure on rhel
1919 68 Rishabh Dave
1920
h3. 2022 Jul 25
1921
1922
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1923
1924 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1925
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1926 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1927
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1928
1929
* https://tracker.ceph.com/issues/55804
1930
  Command failed (workunit test suites/pjd.sh)
1931
* https://tracker.ceph.com/issues/50223
1932
  client.xxxx isn't responding to mclientcaps(revoke)
1933
1934
* https://tracker.ceph.com/issues/54460
1935
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1936 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1937 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1938 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1939 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1940
1941
h3. 2022 July 22
1942
1943
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1944
1945
MDS_HEALTH_DUMMY error in log fixed by followup commit.
1946
transient selinux ping failure
1947
1948
* https://tracker.ceph.com/issues/56694
1949
    qa: avoid blocking forever on hung umount
1950
* https://tracker.ceph.com/issues/56695
1951
    [RHEL stock] pjd test failures
1952
* https://tracker.ceph.com/issues/56696
1953
    admin keyring disappears during qa run
1954
* https://tracker.ceph.com/issues/56697
1955
    qa: fs/snaps fails for fuse
1956
* https://tracker.ceph.com/issues/50222
1957
    osd: 5.2s0 deep-scrub : stat mismatch
1958
* https://tracker.ceph.com/issues/56698
1959
    client: FAILED ceph_assert(_size == 0)
1960
* https://tracker.ceph.com/issues/50223
1961
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1962 66 Rishabh Dave
1963 65 Rishabh Dave
1964
h3. 2022 Jul 15
1965
1966
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1967
1968
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1969
1970
* https://tracker.ceph.com/issues/53859
1971
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1972
* https://tracker.ceph.com/issues/55804
1973
  Command failed (workunit test suites/pjd.sh)
1974
* https://tracker.ceph.com/issues/50223
1975
  client.xxxx isn't responding to mclientcaps(revoke)
1976
* https://tracker.ceph.com/issues/50222
1977
  osd: deep-scrub : stat mismatch
1978
1979
* https://tracker.ceph.com/issues/56632
1980
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1981
* https://tracker.ceph.com/issues/56634
1982
  workunit test fs/snaps/snaptest-intodir.sh
1983
* https://tracker.ceph.com/issues/56644
1984
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1985
1986 61 Rishabh Dave
1987
1988
h3. 2022 July 05
1989 62 Rishabh Dave
1990 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1991
1992
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1993
1994
On 2nd re-run only a few jobs failed -
1995 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1996
1997
1998
* https://tracker.ceph.com/issues/56446
1999
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2000
* https://tracker.ceph.com/issues/55804
2001
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2002
2003
* https://tracker.ceph.com/issues/56445
2004 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2005
* https://tracker.ceph.com/issues/51267
2006
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2007 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2008
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2009 61 Rishabh Dave
2010 58 Venky Shankar
2011
2012
h3. 2022 July 04
2013
2014
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2015
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
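
For reference, excluding the rhel jobs from a scheduled run is done with the <code>--filter-out</code> option of <code>teuthology-suite</code>. A hedged sketch of that kind of invocation (the branch name is taken from the run above; the remaining flags are assumptions, not the exact command used):

<pre>
# Hedged sketch: schedule the fs suite on the testing kernel while skipping
# any job whose description matches "rhel".
teuthology-suite \
  --suite fs \
  --ceph wip-vshankar-testing-20220627-100931 \
  --kernel testing \
  --machine-type smithi \
  --filter-out=rhel \
  --priority 100
</pre>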
2016
2017
* https://tracker.ceph.com/issues/56445
2018 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2019
* https://tracker.ceph.com/issues/56446
2020
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2021
* https://tracker.ceph.com/issues/51964
2022 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2023 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2024 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2025
2026
h3. 2022 June 20
2027
2028
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2029
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2030
2031
* https://tracker.ceph.com/issues/52624
2032
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2033
* https://tracker.ceph.com/issues/55804
2034
    qa failure: pjd link tests failed
2035
* https://tracker.ceph.com/issues/54108
2036
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2037
* https://tracker.ceph.com/issues/55332
2038 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2039
2040
h3. 2022 June 13
2041
2042
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2043
2044
* https://tracker.ceph.com/issues/56024
2045
    cephadm: removes ceph.conf during qa run causing command failure
2046
* https://tracker.ceph.com/issues/48773
2047
    qa: scrub does not complete
2048
* https://tracker.ceph.com/issues/56012
2049
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2050 55 Venky Shankar
2051 54 Venky Shankar
2052
h3. 2022 Jun 13
2053
2054
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2055
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2056
2057
* https://tracker.ceph.com/issues/52624
2058
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2059
* https://tracker.ceph.com/issues/51964
2060
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2061
* https://tracker.ceph.com/issues/53859
2062
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2063
* https://tracker.ceph.com/issues/55804
2064
    qa failure: pjd link tests failed
2065
* https://tracker.ceph.com/issues/56003
2066
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2067
* https://tracker.ceph.com/issues/56011
2068
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2069
* https://tracker.ceph.com/issues/56012
2070 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2071
2072
h3. 2022 Jun 07
2073
2074
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2075
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2076
2077
* https://tracker.ceph.com/issues/52624
2078
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2079
* https://tracker.ceph.com/issues/50223
2080
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2081
* https://tracker.ceph.com/issues/50224
2082 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2083
2084
h3. 2022 May 12
2085 52 Venky Shankar
2086 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2087
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2088
2089
* https://tracker.ceph.com/issues/52624
2090
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2091
* https://tracker.ceph.com/issues/50223
2092
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2093
* https://tracker.ceph.com/issues/55332
2094
    Failure in snaptest-git-ceph.sh
2095
* https://tracker.ceph.com/issues/53859
2096 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2097 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2098
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2099 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2100 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)
2101
2102 50 Venky Shankar
h3. 2022 May 04
2103
2104
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2105 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2106
2107
* https://tracker.ceph.com/issues/52624
2108
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2109
* https://tracker.ceph.com/issues/50223
2110
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2111
* https://tracker.ceph.com/issues/55332
2112
    Failure in snaptest-git-ceph.sh
2113
* https://tracker.ceph.com/issues/53859
2114
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2115
* https://tracker.ceph.com/issues/55516
2116
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2117
* https://tracker.ceph.com/issues/55537
2118
    mds: crash during fs:upgrade test
2119
* https://tracker.ceph.com/issues/55538
2120 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2121
2122
h3. 2022 Apr 25
2123
2124
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2125
2126
* https://tracker.ceph.com/issues/52624
2127
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2128
* https://tracker.ceph.com/issues/50223
2129
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2130
* https://tracker.ceph.com/issues/55258
2131
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2132
* https://tracker.ceph.com/issues/55377
2133 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2134
2135
h3. 2022 Apr 14
2136
2137
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2138
2139
* https://tracker.ceph.com/issues/52624
2140
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2141
* https://tracker.ceph.com/issues/50223
2142
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2143
* https://tracker.ceph.com/issues/52438
2144
    qa: ffsb timeout
2145
* https://tracker.ceph.com/issues/55170
2146
    mds: crash during rejoin (CDir::fetch_keys)
2147
* https://tracker.ceph.com/issues/55331
2148
    pjd failure
2149
* https://tracker.ceph.com/issues/48773
2150
    qa: scrub does not complete
2151
* https://tracker.ceph.com/issues/55332
2152
    Failure in snaptest-git-ceph.sh
2153
* https://tracker.ceph.com/issues/55258
2154 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2155
2156 46 Venky Shankar
h3. 2022 Apr 11
2157 45 Venky Shankar
2158
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2159
2160
* https://tracker.ceph.com/issues/48773
2161
    qa: scrub does not complete
2162
* https://tracker.ceph.com/issues/52624
2163
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2164
* https://tracker.ceph.com/issues/52438
2165
    qa: ffsb timeout
2166
* https://tracker.ceph.com/issues/48680
2167
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2168
* https://tracker.ceph.com/issues/55236
2169
    qa: fs/snaps tests fails with "hit max job timeout"
2170
* https://tracker.ceph.com/issues/54108
2171
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2172
* https://tracker.ceph.com/issues/54971
2173
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2174
* https://tracker.ceph.com/issues/50223
2175
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2176
* https://tracker.ceph.com/issues/55258
2177 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2178 42 Venky Shankar
2179 43 Venky Shankar
h3. 2022 Mar 21
2180
2181
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2182
2183
The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
2184
2185
2186 42 Venky Shankar
h3. 2022 Mar 08
2187
2188
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2189
2190
rerun with
2191
- (drop) https://github.com/ceph/ceph/pull/44679
2192
- (drop) https://github.com/ceph/ceph/pull/44958
2193
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
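
Dropping a PR from an integration run just means rebuilding the wip branch without merging that PR. A minimal sketch of such a rebuild, assuming the branch is a series of candidate PRs merged on top of master; the remote name, branch point, and merge commands below are illustrative, not the exact procedure used:

<pre>
#!/usr/bin/env bash
# Hedged sketch: rebuild the testing branch, merging only the PRs passed as
# arguments and simply omitting the dropped ones (here 44679 and 44958).
set -ex
git fetch ceph                                            # 'ceph' = remote for ceph/ceph (assumption)
git checkout -B wip-vshankar-testing-20220304-132102 ceph/master
for pr in "$@"; do                                        # PR numbers to keep
  git fetch ceph "pull/${pr}/head"
  git merge --no-ff -m "Merge PR #${pr} into testing branch" FETCH_HEAD
done
</pre>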
2194
2195
* https://tracker.ceph.com/issues/54419 (new)
2196
    `ceph orch upgrade start` seems to never reach completion
2197
* https://tracker.ceph.com/issues/51964
2198
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2199
* https://tracker.ceph.com/issues/52624
2200
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2201
* https://tracker.ceph.com/issues/50223
2202
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2203
* https://tracker.ceph.com/issues/52438
2204
    qa: ffsb timeout
2205
* https://tracker.ceph.com/issues/50821
2206
    qa: untar_snap_rm failure during mds thrashing
2207 41 Venky Shankar
2208
2209
h3. 2022 Feb 09
2210
2211
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2212
2213
rerun with
2214
- (drop) https://github.com/ceph/ceph/pull/37938
2215
- (drop) https://github.com/ceph/ceph/pull/44335
2216
- (drop) https://github.com/ceph/ceph/pull/44491
2217
- (drop) https://github.com/ceph/ceph/pull/44501
2218
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2219
2220
* https://tracker.ceph.com/issues/51964
2221
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2222
* https://tracker.ceph.com/issues/54066
2223
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2224
* https://tracker.ceph.com/issues/48773
2225
    qa: scrub does not complete
2226
* https://tracker.ceph.com/issues/52624
2227
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2228
* https://tracker.ceph.com/issues/50223
2229
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2230
* https://tracker.ceph.com/issues/52438
2231 40 Patrick Donnelly
    qa: ffsb timeout
2232
2233
h3. 2022 Feb 01
2234
2235
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2236
2237
* https://tracker.ceph.com/issues/54107
2238
    kclient: hang during umount
2239
* https://tracker.ceph.com/issues/54106
2240
    kclient: hang during workunit cleanup
2241
* https://tracker.ceph.com/issues/54108
2242
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2243
* https://tracker.ceph.com/issues/48773
2244
    qa: scrub does not complete
2245
* https://tracker.ceph.com/issues/52438
2246
    qa: ffsb timeout
2247 36 Venky Shankar
2248
2249
h3. 2022 Jan 13
2250 39 Venky Shankar
2251 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2252 38 Venky Shankar
2253
rerun with:
2254 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2255
- (drop) https://github.com/ceph/ceph/pull/43184
2256
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2257
2258
* https://tracker.ceph.com/issues/50223
2259
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2260
* https://tracker.ceph.com/issues/51282
2261
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2262
* https://tracker.ceph.com/issues/48773
2263
    qa: scrub does not complete
2264
* https://tracker.ceph.com/issues/52624
2265
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2266
* https://tracker.ceph.com/issues/53859
2267 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2268
2269
h3. 2022 Jan 03
2270
2271
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2272
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2273
2274
* https://tracker.ceph.com/issues/50223
2275
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2276
* https://tracker.ceph.com/issues/51964
2277
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2278
* https://tracker.ceph.com/issues/51267
2279
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2280
* https://tracker.ceph.com/issues/51282
2281
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2282
* https://tracker.ceph.com/issues/50821
2283
    qa: untar_snap_rm failure during mds thrashing
2284 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2285
    mds: "FAILED ceph_assert(!segments.empty())"
2286
* https://tracker.ceph.com/issues/52279
2287 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2288 33 Patrick Donnelly
2289
2290
h3. 2021 Dec 22
2291
2292
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2293
2294
* https://tracker.ceph.com/issues/52624
2295
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2296
* https://tracker.ceph.com/issues/50223
2297
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2298
* https://tracker.ceph.com/issues/52279
2299
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2300
* https://tracker.ceph.com/issues/50224
2301
    qa: test_mirroring_init_failure_with_recovery failure
2302
* https://tracker.ceph.com/issues/48773
2303
    qa: scrub does not complete
2304 32 Venky Shankar
2305
2306
h3. 2021 Nov 30
2307
2308
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2309
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2310
2311
* https://tracker.ceph.com/issues/53436
2312
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2313
* https://tracker.ceph.com/issues/51964
2314
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2315
* https://tracker.ceph.com/issues/48812
2316
    qa: test_scrub_pause_and_resume_with_abort failure
2317
* https://tracker.ceph.com/issues/51076
2318
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2319
* https://tracker.ceph.com/issues/50223
2320
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2321
* https://tracker.ceph.com/issues/52624
2322
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2323
* https://tracker.ceph.com/issues/50250
2324
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2325 31 Patrick Donnelly
2326
2327
h3. 2021 November 9
2328
2329
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2330
2331
* https://tracker.ceph.com/issues/53214
2332
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2333
* https://tracker.ceph.com/issues/48773
2334
    qa: scrub does not complete
2335
* https://tracker.ceph.com/issues/50223
2336
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2337
* https://tracker.ceph.com/issues/51282
2338
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2339
* https://tracker.ceph.com/issues/52624
2340
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2341
* https://tracker.ceph.com/issues/53216
2342
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2343
* https://tracker.ceph.com/issues/50250
2344
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2345
2346 30 Patrick Donnelly
2347
2348
h3. 2021 November 03
2349
2350
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2351
2352
* https://tracker.ceph.com/issues/51964
2353
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2354
* https://tracker.ceph.com/issues/51282
2355
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2356
* https://tracker.ceph.com/issues/52436
2357
    fs/ceph: "corrupt mdsmap"
2358
* https://tracker.ceph.com/issues/53074
2359
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2360
* https://tracker.ceph.com/issues/53150
2361
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2362
* https://tracker.ceph.com/issues/53155
2363
    MDSMonitor: assertion during upgrade to v16.2.5+
2364 29 Patrick Donnelly
2365
2366
h3. 2021 October 26
2367
2368
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2369
2370
* https://tracker.ceph.com/issues/53074
2371
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2372
* https://tracker.ceph.com/issues/52997
2373
    testing: hanging umount
2374
* https://tracker.ceph.com/issues/50824
2375
    qa: snaptest-git-ceph bus error
2376
* https://tracker.ceph.com/issues/52436
2377
    fs/ceph: "corrupt mdsmap"
2378
* https://tracker.ceph.com/issues/48773
2379
    qa: scrub does not complete
2380
* https://tracker.ceph.com/issues/53082
2381
    ceph-fuse: segmentation fault in Client::handle_mds_map
2382
* https://tracker.ceph.com/issues/50223
2383
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2384
* https://tracker.ceph.com/issues/52624
2385
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2386
* https://tracker.ceph.com/issues/50224
2387
    qa: test_mirroring_init_failure_with_recovery failure
2388
* https://tracker.ceph.com/issues/50821
2389
    qa: untar_snap_rm failure during mds thrashing
2390
* https://tracker.ceph.com/issues/50250
2391
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2392
2393 27 Patrick Donnelly
2394
2395 28 Patrick Donnelly
h3. 2021 October 19
2396 27 Patrick Donnelly
2397
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2398
2399
* https://tracker.ceph.com/issues/52995
2400
    qa: test_standby_count_wanted failure
2401
* https://tracker.ceph.com/issues/52948
2402
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2403
* https://tracker.ceph.com/issues/52996
2404
    qa: test_perf_counters via test_openfiletable
2405
* https://tracker.ceph.com/issues/48772
2406
    qa: pjd: not ok 9, 44, 80
2407
* https://tracker.ceph.com/issues/52997
2408
    testing: hanging umount
2409
* https://tracker.ceph.com/issues/50250
2410
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2411
* https://tracker.ceph.com/issues/52624
2412
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2413
* https://tracker.ceph.com/issues/50223
2414
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2415
* https://tracker.ceph.com/issues/50821
2416
    qa: untar_snap_rm failure during mds thrashing
2417
* https://tracker.ceph.com/issues/48773
2418
    qa: scrub does not complete
2419 26 Patrick Donnelly
2420
2421
h3. 2021 October 12
2422
2423
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2424
2425
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2426
2427
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2428
2429
2430
* https://tracker.ceph.com/issues/51282
2431
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2432
* https://tracker.ceph.com/issues/52948
2433
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2434
* https://tracker.ceph.com/issues/48773
2435
    qa: scrub does not complete
2436
* https://tracker.ceph.com/issues/50224
2437
    qa: test_mirroring_init_failure_with_recovery failure
2438
* https://tracker.ceph.com/issues/52949
2439
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2440 25 Patrick Donnelly
2441 23 Patrick Donnelly
2442 24 Patrick Donnelly
h3. 2021 October 02
2443
2444
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2445
2446
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2447
2448
test_simple failures caused by PR in this set.
2449
2450
A few reruns because of QA infra noise.
2451
2452
* https://tracker.ceph.com/issues/52822
2453
    qa: failed pacific install on fs:upgrade
2454
* https://tracker.ceph.com/issues/52624
2455
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2456
* https://tracker.ceph.com/issues/50223
2457
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2458
* https://tracker.ceph.com/issues/48773
2459
    qa: scrub does not complete
2460
2461
2462 23 Patrick Donnelly
h3. 2021 September 20
2463
2464
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2465
2466
* https://tracker.ceph.com/issues/52677
2467
    qa: test_simple failure
2468
* https://tracker.ceph.com/issues/51279
2469
    kclient hangs on umount (testing branch)
2470
* https://tracker.ceph.com/issues/50223
2471
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2472
* https://tracker.ceph.com/issues/50250
2473
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2474
* https://tracker.ceph.com/issues/52624
2475
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2476
* https://tracker.ceph.com/issues/52438
2477
    qa: ffsb timeout
2478 22 Patrick Donnelly
2479
2480
h3. 2021 September 10
2481
2482
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2483
2484
* https://tracker.ceph.com/issues/50223
2485
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2486
* https://tracker.ceph.com/issues/50250
2487
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2488
* https://tracker.ceph.com/issues/52624
2489
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2490
* https://tracker.ceph.com/issues/52625
2491
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2492
* https://tracker.ceph.com/issues/52439
2493
    qa: acls does not compile on centos stream
2494
* https://tracker.ceph.com/issues/50821
2495
    qa: untar_snap_rm failure during mds thrashing
2496
* https://tracker.ceph.com/issues/48773
2497
    qa: scrub does not complete
2498
* https://tracker.ceph.com/issues/52626
2499
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2500
* https://tracker.ceph.com/issues/51279
2501
    kclient hangs on umount (testing branch)
2502 21 Patrick Donnelly
2503
2504
h3. 2021 August 27
2505
2506
Several jobs died because of device failures.
2507
2508
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2509
2510
* https://tracker.ceph.com/issues/52430
2511
    mds: fast async create client mount breaks racy test
2512
* https://tracker.ceph.com/issues/52436
2513
    fs/ceph: "corrupt mdsmap"
2514
* https://tracker.ceph.com/issues/52437
2515
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2516
* https://tracker.ceph.com/issues/51282
2517
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2518
* https://tracker.ceph.com/issues/52438
2519
    qa: ffsb timeout
2520
* https://tracker.ceph.com/issues/52439
2521
    qa: acls does not compile on centos stream
2522 20 Patrick Donnelly
2523
2524
h3. 2021 July 30
2525
2526
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2527
2528
* https://tracker.ceph.com/issues/50250
2529
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2530
* https://tracker.ceph.com/issues/51282
2531
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2532
* https://tracker.ceph.com/issues/48773
2533
    qa: scrub does not complete
2534
* https://tracker.ceph.com/issues/51975
2535
    pybind/mgr/stats: KeyError
2536 19 Patrick Donnelly
2537
2538
h3. 2021 July 28
2539
2540
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2541
2542
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2543
2544
* https://tracker.ceph.com/issues/51905
2545
    qa: "error reading sessionmap 'mds1_sessionmap'"
2546
* https://tracker.ceph.com/issues/48773
2547
    qa: scrub does not complete
2548
* https://tracker.ceph.com/issues/50250
2549
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2550
* https://tracker.ceph.com/issues/51267
2551
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2552
* https://tracker.ceph.com/issues/51279
2553
    kclient hangs on umount (testing branch)
2554 18 Patrick Donnelly
2555
2556
h3. 2021 July 16
2557
2558
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2559
2560
* https://tracker.ceph.com/issues/48773
2561
    qa: scrub does not complete
2562
* https://tracker.ceph.com/issues/48772
2563
    qa: pjd: not ok 9, 44, 80
2564
* https://tracker.ceph.com/issues/45434
2565
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2566
* https://tracker.ceph.com/issues/51279
2567
    kclient hangs on umount (testing branch)
2568
* https://tracker.ceph.com/issues/50824
2569
    qa: snaptest-git-ceph bus error
2570 17 Patrick Donnelly
2571
2572
h3. 2021 July 04
2573
2574
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2575
2576
* https://tracker.ceph.com/issues/48773
2577
    qa: scrub does not complete
2578
* https://tracker.ceph.com/issues/39150
2579
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2580
* https://tracker.ceph.com/issues/45434
2581
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2582
* https://tracker.ceph.com/issues/51282
2583
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2584
* https://tracker.ceph.com/issues/48771
2585
    qa: iogen: workload fails to cause balancing
2586
* https://tracker.ceph.com/issues/51279
2587
    kclient hangs on umount (testing branch)
2588
* https://tracker.ceph.com/issues/50250
2589
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2590 16 Patrick Donnelly
2591
2592
h3. 2021 July 01
2593
2594
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2595
2596
* https://tracker.ceph.com/issues/51197
2597
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2598
* https://tracker.ceph.com/issues/50866
2599
    osd: stat mismatch on objects
2600
* https://tracker.ceph.com/issues/48773
2601
    qa: scrub does not complete
2602 15 Patrick Donnelly
2603
2604
h3. 2021 June 26
2605
2606
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2607
2608
* https://tracker.ceph.com/issues/51183
2609
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2610
* https://tracker.ceph.com/issues/51410
2611
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2612
* https://tracker.ceph.com/issues/48773
2613
    qa: scrub does not complete
2614
* https://tracker.ceph.com/issues/51282
2615
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2616
* https://tracker.ceph.com/issues/51169
2617
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2618
* https://tracker.ceph.com/issues/48772
2619
    qa: pjd: not ok 9, 44, 80
2620 14 Patrick Donnelly
2621
2622
h3. 2021 June 21
2623
2624
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2625
2626
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2627
2628
* https://tracker.ceph.com/issues/51282
2629
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2630
* https://tracker.ceph.com/issues/51183
2631
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2632
* https://tracker.ceph.com/issues/48773
2633
    qa: scrub does not complete
2634
* https://tracker.ceph.com/issues/48771
2635
    qa: iogen: workload fails to cause balancing
2636
* https://tracker.ceph.com/issues/51169
2637
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2638
* https://tracker.ceph.com/issues/50495
2639
    libcephfs: shutdown race fails with status 141
2640
* https://tracker.ceph.com/issues/45434
2641
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2642
* https://tracker.ceph.com/issues/50824
2643
    qa: snaptest-git-ceph bus error
2644
* https://tracker.ceph.com/issues/50223
2645
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2646 13 Patrick Donnelly
2647
2648
h3. 2021 June 16
2649
2650
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2651
2652
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2653
2654
* https://tracker.ceph.com/issues/45434
2655
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2656
* https://tracker.ceph.com/issues/51169
2657
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2658
* https://tracker.ceph.com/issues/43216
2659
    MDSMonitor: removes MDS coming out of quorum election
2660
* https://tracker.ceph.com/issues/51278
2661
    mds: "FAILED ceph_assert(!segments.empty())"
2662
* https://tracker.ceph.com/issues/51279
2663
    kclient hangs on umount (testing branch)
2664
* https://tracker.ceph.com/issues/51280
2665
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2666
* https://tracker.ceph.com/issues/51183
2667
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2668
* https://tracker.ceph.com/issues/51281
2669
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2670
* https://tracker.ceph.com/issues/48773
2671
    qa: scrub does not complete
2672
* https://tracker.ceph.com/issues/51076
2673
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2674
* https://tracker.ceph.com/issues/51228
2675
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2676
* https://tracker.ceph.com/issues/51282
2677
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2678 12 Patrick Donnelly
2679
2680
h3. 2021 June 14
2681
2682
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2683
2684
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2685
2686
* https://tracker.ceph.com/issues/51169
2687
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2688
* https://tracker.ceph.com/issues/51228
2689
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2690
* https://tracker.ceph.com/issues/48773
2691
    qa: scrub does not complete
2692
* https://tracker.ceph.com/issues/51183
2693
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2694
* https://tracker.ceph.com/issues/45434
2695
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2696
* https://tracker.ceph.com/issues/51182
2697
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2698
* https://tracker.ceph.com/issues/51229
2699
    qa: test_multi_snap_schedule list difference failure
2700
* https://tracker.ceph.com/issues/50821
2701
    qa: untar_snap_rm failure during mds thrashing
2702 11 Patrick Donnelly
2703
2704
h3. 2021 June 13
2705
2706
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2707
2708
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2709
2710
* https://tracker.ceph.com/issues/51169
2711
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2712
* https://tracker.ceph.com/issues/48773
2713
    qa: scrub does not complete
2714
* https://tracker.ceph.com/issues/51182
2715
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2716
* https://tracker.ceph.com/issues/51183
2717
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2718
* https://tracker.ceph.com/issues/51197
2719
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2720
* https://tracker.ceph.com/issues/45434
2721 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2722
2723
h3. 2021 June 11
2724
2725
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2726
2727
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2728
2729
* https://tracker.ceph.com/issues/51169
2730
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2731
* https://tracker.ceph.com/issues/45434
2732
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2733
* https://tracker.ceph.com/issues/48771
2734
    qa: iogen: workload fails to cause balancing
2735
* https://tracker.ceph.com/issues/43216
2736
    MDSMonitor: removes MDS coming out of quorum election
2737
* https://tracker.ceph.com/issues/51182
2738
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2739
* https://tracker.ceph.com/issues/50223
2740
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2741
* https://tracker.ceph.com/issues/48773
2742
    qa: scrub does not complete
2743
* https://tracker.ceph.com/issues/51183
2744
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2745
* https://tracker.ceph.com/issues/51184
2746
    qa: fs:bugs does not specify distro
2747 9 Patrick Donnelly
2748
2749
h3. 2021 June 03
2750
2751
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2752
2753
* https://tracker.ceph.com/issues/45434
2754
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2755
* https://tracker.ceph.com/issues/50016
2756
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2757
* https://tracker.ceph.com/issues/50821
2758
    qa: untar_snap_rm failure during mds thrashing
2759
* https://tracker.ceph.com/issues/50622 (regression)
2760
    msg: active_connections regression
2761
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2762
    qa: failed umount in test_volumes
2763
* https://tracker.ceph.com/issues/48773
2764
    qa: scrub does not complete
2765
* https://tracker.ceph.com/issues/43216
2766
    MDSMonitor: removes MDS coming out of quorum election
2767 7 Patrick Donnelly
2768
2769 8 Patrick Donnelly
h3. 2021 May 18
2770
2771
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2772
2773
A regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2774
looked better. Some odd new noise in the rerun relating to packaging and "No
2775
module named 'tasks.ceph'".
2776
2777
* https://tracker.ceph.com/issues/50824
2778
    qa: snaptest-git-ceph bus error
2779
* https://tracker.ceph.com/issues/50622 (regression)
2780
    msg: active_connections regression
2781
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2782
    qa: failed umount in test_volumes
2783
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2784
    qa: quota failure
2785
2786
2787 7 Patrick Donnelly
h3. 2021 May 18
2788
2789
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2790
2791
* https://tracker.ceph.com/issues/50821
2792
    qa: untar_snap_rm failure during mds thrashing
2793
* https://tracker.ceph.com/issues/48773
2794
    qa: scrub does not complete
2795
* https://tracker.ceph.com/issues/45591
2796
    mgr: FAILED ceph_assert(daemon != nullptr)
2797
* https://tracker.ceph.com/issues/50866
2798
    osd: stat mismatch on objects
2799
* https://tracker.ceph.com/issues/50016
2800
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2801
* https://tracker.ceph.com/issues/50867
2802
    qa: fs:mirror: reduced data availability
2803
2805
* https://tracker.ceph.com/issues/50622 (regression)
2806
    msg: active_connections regression
2807
* https://tracker.ceph.com/issues/50223
2808
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2809
* https://tracker.ceph.com/issues/50868
2810
    qa: "kern.log.gz already exists; not overwritten"
2811
* https://tracker.ceph.com/issues/50870
2812
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2813 6 Patrick Donnelly
2814
2815
h3. 2021 May 11
2816
2817
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2818
2819
* one class of failures was caused by a PR in this run
2820
* https://tracker.ceph.com/issues/48812
2821
    qa: test_scrub_pause_and_resume_with_abort failure
2822
* https://tracker.ceph.com/issues/50390
2823
    mds: monclient: wait_auth_rotating timed out after 30
2824
* https://tracker.ceph.com/issues/48773
2825
    qa: scrub does not complete
2826
* https://tracker.ceph.com/issues/50821
2827
    qa: untar_snap_rm failure during mds thrashing
2828
* https://tracker.ceph.com/issues/50224
2829
    qa: test_mirroring_init_failure_with_recovery failure
2830
* https://tracker.ceph.com/issues/50622 (regression)
2831
    msg: active_connections regression
2832
* https://tracker.ceph.com/issues/50825
2833
    qa: snaptest-git-ceph hang during mon thrashing v2
2834
2836
* https://tracker.ceph.com/issues/50823
2837
    qa: RuntimeError: timeout waiting for cluster to stabilize
2838 5 Patrick Donnelly
2839
2840
h3. 2021 May 14
2841
2842
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2843
2844
* https://tracker.ceph.com/issues/48812
2845
    qa: test_scrub_pause_and_resume_with_abort failure
2846
* https://tracker.ceph.com/issues/50821
2847
    qa: untar_snap_rm failure during mds thrashing
2848
* https://tracker.ceph.com/issues/50622 (regression)
2849
    msg: active_connections regression
2850
* https://tracker.ceph.com/issues/50822
2851
    qa: testing kernel patch for client metrics causes mds abort
2852
* https://tracker.ceph.com/issues/48773
2853
    qa: scrub does not complete
2854
* https://tracker.ceph.com/issues/50823
2855
    qa: RuntimeError: timeout waiting for cluster to stabilize
2856
* https://tracker.ceph.com/issues/50824
2857
    qa: snaptest-git-ceph bus error
2858
* https://tracker.ceph.com/issues/50825
2859
    qa: snaptest-git-ceph hang during mon thrashing v2
2860
* https://tracker.ceph.com/issues/50826
2861
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2862 4 Patrick Donnelly
2863
2864
h3. 2021 May 01
2865
2866
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2867
2868
* https://tracker.ceph.com/issues/45434
2869
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2870
* https://tracker.ceph.com/issues/50281
2871
    qa: untar_snap_rm timeout
2872
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2873
    qa: quota failure
2874
* https://tracker.ceph.com/issues/48773
2875
    qa: scrub does not complete
2876
* https://tracker.ceph.com/issues/50390
2877
    mds: monclient: wait_auth_rotating timed out after 30
2878
* https://tracker.ceph.com/issues/50250
2879
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2880
* https://tracker.ceph.com/issues/50622 (regression)
2881
    msg: active_connections regression
2882
* https://tracker.ceph.com/issues/45591
2883
    mgr: FAILED ceph_assert(daemon != nullptr)
2884
* https://tracker.ceph.com/issues/50221
2885
    qa: snaptest-git-ceph failure in git diff
2886
* https://tracker.ceph.com/issues/50016
2887
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2888 3 Patrick Donnelly
2889
2890
h3. 2021 Apr 15
2891
2892
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2893
2894
* https://tracker.ceph.com/issues/50281
2895
    qa: untar_snap_rm timeout
2896
* https://tracker.ceph.com/issues/50220
2897
    qa: dbench workload timeout
2898
* https://tracker.ceph.com/issues/50246
2899
    mds: failure replaying journal (EMetaBlob)
2900
* https://tracker.ceph.com/issues/50250
2901
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2902
* https://tracker.ceph.com/issues/50016
2903
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2904
* https://tracker.ceph.com/issues/50222
2905
    osd: 5.2s0 deep-scrub : stat mismatch
2906
* https://tracker.ceph.com/issues/45434
2907
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2908
* https://tracker.ceph.com/issues/49845
2909
    qa: failed umount in test_volumes
2910
* https://tracker.ceph.com/issues/37808
2911
    osd: osdmap cache weak_refs assert during shutdown
2912
* https://tracker.ceph.com/issues/50387
2913
    client: fs/snaps failure
2914
* https://tracker.ceph.com/issues/50389
2915
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
2916
* https://tracker.ceph.com/issues/50216
2917
    qa: "ls: cannot access 'lost+found': No such file or directory"
2918
* https://tracker.ceph.com/issues/50390
2919
    mds: monclient: wait_auth_rotating timed out after 30
2920
2921 1 Patrick Donnelly
2922
2923 2 Patrick Donnelly
h3. 2021 Apr 08
2924
2925
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2926
2927
* https://tracker.ceph.com/issues/45434
2928
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2929
* https://tracker.ceph.com/issues/50016
2930
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2931
* https://tracker.ceph.com/issues/48773
2932
    qa: scrub does not complete
2933
* https://tracker.ceph.com/issues/50279
2934
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2935
* https://tracker.ceph.com/issues/50246
2936
    mds: failure replaying journal (EMetaBlob)
2937
* https://tracker.ceph.com/issues/48365
2938
    qa: ffsb build failure on CentOS 8.2
2939
* https://tracker.ceph.com/issues/50216
2940
    qa: "ls: cannot access 'lost+found': No such file or directory"
2941
* https://tracker.ceph.com/issues/50223
2942
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2943
* https://tracker.ceph.com/issues/50280
2944
    cephadm: RuntimeError: uid/gid not found
2945
* https://tracker.ceph.com/issues/50281
2946
    qa: untar_snap_rm timeout
2947
2948 1 Patrick Donnelly
h3. 2021 Apr 08
2949
2950
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2951
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2952
2953
* https://tracker.ceph.com/issues/50246
2954
    mds: failure replaying journal (EMetaBlob)
2955
* https://tracker.ceph.com/issues/50250
2956
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2957
2958
2959
h3. 2021 Apr 07
2960
2961
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
2962
2963
* https://tracker.ceph.com/issues/50215
2964
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
2965
* https://tracker.ceph.com/issues/49466
2966
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2967
* https://tracker.ceph.com/issues/50216
2968
    qa: "ls: cannot access 'lost+found': No such file or directory"
2969
* https://tracker.ceph.com/issues/48773
2970
    qa: scrub does not complete
2971
* https://tracker.ceph.com/issues/49845
2972
    qa: failed umount in test_volumes
2973
* https://tracker.ceph.com/issues/50220
2974
    qa: dbench workload timeout
2975
* https://tracker.ceph.com/issues/50221
2976
    qa: snaptest-git-ceph failure in git diff
2977
* https://tracker.ceph.com/issues/50222
2978
    osd: 5.2s0 deep-scrub : stat mismatch
2979
* https://tracker.ceph.com/issues/50223
2980
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2981
* https://tracker.ceph.com/issues/50224
2982
    qa: test_mirroring_init_failure_with_recovery failure
2983
2984
h3. 2021 Apr 01
2985
2986
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
2987
2988
* https://tracker.ceph.com/issues/48772
2989
    qa: pjd: not ok 9, 44, 80
2990
* https://tracker.ceph.com/issues/50177
2991
    osd: "stalled aio... buggy kernel or bad device?"
2992
* https://tracker.ceph.com/issues/48771
2993
    qa: iogen: workload fails to cause balancing
2994
* https://tracker.ceph.com/issues/49845
2995
    qa: failed umount in test_volumes
2996
* https://tracker.ceph.com/issues/48773
2997
    qa: scrub does not complete
2998
* https://tracker.ceph.com/issues/48805
2999
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3000
* https://tracker.ceph.com/issues/50178
3001
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3002
* https://tracker.ceph.com/issues/45434
3003
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3004
3005
h3. 2021 Mar 24
3006
3007
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3008
3009
* https://tracker.ceph.com/issues/49500
3010
    qa: "Assertion `cb_done' failed."
3011
* https://tracker.ceph.com/issues/50019
3012
    qa: mount failure with cephadm "probably no MDS server is up?"
3013
* https://tracker.ceph.com/issues/50020
3014
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3015
* https://tracker.ceph.com/issues/48773
3016
    qa: scrub does not complete
3017
* https://tracker.ceph.com/issues/45434
3018
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3019
* https://tracker.ceph.com/issues/48805
3020
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3021
* https://tracker.ceph.com/issues/48772
3022
    qa: pjd: not ok 9, 44, 80
3023
* https://tracker.ceph.com/issues/50021
3024
    qa: snaptest-git-ceph failure during mon thrashing
3025
* https://tracker.ceph.com/issues/48771
3026
    qa: iogen: workload fails to cause balancing
3027
* https://tracker.ceph.com/issues/50016
3028
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3029
* https://tracker.ceph.com/issues/49466
3030
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3031
3032
3033
h3. 2021 Mar 18
3034
3035
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3036
3037
* https://tracker.ceph.com/issues/49466
3038
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3039
* https://tracker.ceph.com/issues/48773
3040
    qa: scrub does not complete
3041
* https://tracker.ceph.com/issues/48805
3042
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3043
* https://tracker.ceph.com/issues/45434
3044
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3045
* https://tracker.ceph.com/issues/49845
3046
    qa: failed umount in test_volumes
3047
* https://tracker.ceph.com/issues/49605
3048
    mgr: drops command on the floor
3049
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3050
    qa: quota failure
3051
* https://tracker.ceph.com/issues/49928
3052
    client: items pinned in cache preventing unmount x2
3053
3054
h3. 2021 Mar 15
3055
3056
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3057
3058
* https://tracker.ceph.com/issues/49842
3059
    qa: stuck pkg install
3060
* https://tracker.ceph.com/issues/49466
3061
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3062
* https://tracker.ceph.com/issues/49822
3063
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3064
* https://tracker.ceph.com/issues/49240
3065
    terminate called after throwing an instance of 'std::bad_alloc'
3066
* https://tracker.ceph.com/issues/48773
3067
    qa: scrub does not complete
3068
* https://tracker.ceph.com/issues/45434
3069
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3070
* https://tracker.ceph.com/issues/49500
3071
    qa: "Assertion `cb_done' failed."
3072
* https://tracker.ceph.com/issues/49843
3073
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3074
* https://tracker.ceph.com/issues/49845
3075
    qa: failed umount in test_volumes
3076
* https://tracker.ceph.com/issues/48805
3077
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3078
* https://tracker.ceph.com/issues/49605
3079
    mgr: drops command on the floor
3080
3081
and one failure caused by PR: https://github.com/ceph/ceph/pull/39969
3082
3083
3084
h3. 2021 Mar 09
3085
3086
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3087
3088
* https://tracker.ceph.com/issues/49500
3089
    qa: "Assertion `cb_done' failed."
3090
* https://tracker.ceph.com/issues/48805
3091
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3092
* https://tracker.ceph.com/issues/48773
3093
    qa: scrub does not complete
3094
* https://tracker.ceph.com/issues/45434
3095
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3096
* https://tracker.ceph.com/issues/49240
3097
    terminate called after throwing an instance of 'std::bad_alloc'
3098
* https://tracker.ceph.com/issues/49466
3099
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3100
* https://tracker.ceph.com/issues/49684
3101
    qa: fs:cephadm mount does not wait for mds to be created
3102
* https://tracker.ceph.com/issues/48771
3103
    qa: iogen: workload fails to cause balancing