h1. <code>main</code> branch

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denials related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.

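A minimal way to see that symptom by hand (a rough sketch, not the QA task code; the mount path is only an example):

<pre>
# Hedged sketch of the i64502 symptom, assuming a ceph-fuse mount at
# /mnt/cephfs (example path). fusermount -u should detach the mount;
# in the affected jobs the mount stays attached and only goes away once
# the daemons are stopped during test cleanup.
fusermount -u /mnt/cephfs
sleep 5
if mountpoint -q /mnt/cephfs; then
    echo "ceph-fuse mount still attached after fusermount -u"
fi
</pre>
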
h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* from the last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(never mind the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023
1024
1025
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1026
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1027
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1028
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1029
One more extra run to check whether blogbench.sh fails every time:
1030
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1031
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1032 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1033
1034
* https://tracker.ceph.com/issues/61892
1035
  test_snapshot_remove (test_strays.TestStrays) failed
1036
* https://tracker.ceph.com/issues/53859
1037
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1038
* https://tracker.ceph.com/issues/61982
1039
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1040
* https://tracker.ceph.com/issues/52438
1041
  qa: ffsb timeout
1042
* https://tracker.ceph.com/issues/54460
1043
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1044
* https://tracker.ceph.com/issues/57655
1045
  qa: fs:mixed-clients kernel_untar_build failure
1046
* https://tracker.ceph.com/issues/48773
1047
  reached max tries: scrub does not complete
1048
* https://tracker.ceph.com/issues/58340
1049
  mds: fsstress.sh hangs with multimds
1050
* https://tracker.ceph.com/issues/61400
1051
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1052
* https://tracker.ceph.com/issues/57206
1053
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1054
  
1055
* https://tracker.ceph.com/issues/57656
1056
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1057
* https://tracker.ceph.com/issues/61399
1058
  ior build failure
1059
* https://tracker.ceph.com/issues/57676
1060
  error during scrub thrashing: backtrace
1061
  
1062
* https://tracker.ceph.com/issues/38452
1063
  'sudo -u postgres -- pgbench -s 500 -i' failed
1064 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1065 157 Venky Shankar
  blogbench.sh failure
1066
1067
h3. 18 July 2023
1068
1069
* https://tracker.ceph.com/issues/52624
1070
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1071
* https://tracker.ceph.com/issues/57676
1072
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1073
* https://tracker.ceph.com/issues/54460
1074
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1075
* https://tracker.ceph.com/issues/57655
1076
    qa: fs:mixed-clients kernel_untar_build failure
1077
* https://tracker.ceph.com/issues/51964
1078
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1079
* https://tracker.ceph.com/issues/59344
1080
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1081
* https://tracker.ceph.com/issues/61182
1082
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1083
* https://tracker.ceph.com/issues/61957
1084
    test_client_limits.TestClientLimits.test_client_release_bug
1085
* https://tracker.ceph.com/issues/59348
1086
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1087
* https://tracker.ceph.com/issues/61892
1088
    test_strays.TestStrays.test_snapshot_remove failed
1089
* https://tracker.ceph.com/issues/59346
1090
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1091
* https://tracker.ceph.com/issues/44565
1092
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1093
* https://tracker.ceph.com/issues/62067
1094
    ffsb.sh failure "Resource temporarily unavailable"
1095 156 Venky Shankar
1096
1097
h3. 17 July 2023
1098
1099
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1100
1101
* https://tracker.ceph.com/issues/61982
1102
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1103
* https://tracker.ceph.com/issues/59344
1104
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1105
* https://tracker.ceph.com/issues/61182
1106
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1107
* https://tracker.ceph.com/issues/61957
1108
    test_client_limits.TestClientLimits.test_client_release_bug
1109
* https://tracker.ceph.com/issues/61400
1110
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1111
* https://tracker.ceph.com/issues/59348
1112
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1113
* https://tracker.ceph.com/issues/61892
1114
    test_strays.TestStrays.test_snapshot_remove failed
1115
* https://tracker.ceph.com/issues/59346
1116
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1117
* https://tracker.ceph.com/issues/62036
1118
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1119
* https://tracker.ceph.com/issues/61737
1120
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1121
* https://tracker.ceph.com/issues/44565
1122
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1123 155 Rishabh Dave
1124 1 Patrick Donnelly
1125 153 Rishabh Dave
h3. 13 July 2023 Run 2
1126 152 Rishabh Dave
1127
1128
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1129
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1130
1131
* https://tracker.ceph.com/issues/61957
1132
  test_client_limits.TestClientLimits.test_client_release_bug
1133
* https://tracker.ceph.com/issues/61982
1134
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1135
* https://tracker.ceph.com/issues/59348
1136
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1137
* https://tracker.ceph.com/issues/59344
1138
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1139
* https://tracker.ceph.com/issues/54460
1140
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1141
* https://tracker.ceph.com/issues/57655
1142
  qa: fs:mixed-clients kernel_untar_build failure
1143
* https://tracker.ceph.com/issues/61400
1144
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1145
* https://tracker.ceph.com/issues/61399
1146
  ior build failure
1147
1148 151 Venky Shankar
h3. 13 July 2023
1149
1150
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1151
1152
* https://tracker.ceph.com/issues/54460
1153
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1154
* https://tracker.ceph.com/issues/61400
1155
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1156
* https://tracker.ceph.com/issues/57655
1157
    qa: fs:mixed-clients kernel_untar_build failure
1158
* https://tracker.ceph.com/issues/61945
1159
    LibCephFS.DelegTimeout failure
1160
* https://tracker.ceph.com/issues/52624
1161
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1162
* https://tracker.ceph.com/issues/57676
1163
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1164
* https://tracker.ceph.com/issues/59348
1165
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1166
* https://tracker.ceph.com/issues/59344
1167
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1168
* https://tracker.ceph.com/issues/51964
1169
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1170
* https://tracker.ceph.com/issues/59346
1171
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1172
* https://tracker.ceph.com/issues/61982
1173
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1174 150 Rishabh Dave
1175
1176
h3. 13 Jul 2023
1177
1178
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1179
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1180
1181
* https://tracker.ceph.com/issues/61957
1182
  test_client_limits.TestClientLimits.test_client_release_bug
1183
* https://tracker.ceph.com/issues/59348
1184
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1185
* https://tracker.ceph.com/issues/59346
1186
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1187
* https://tracker.ceph.com/issues/48773
1188
  scrub does not complete: reached max tries
1189
* https://tracker.ceph.com/issues/59344
1190
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1191
* https://tracker.ceph.com/issues/52438
1192
  qa: ffsb timeout
1193
* https://tracker.ceph.com/issues/57656
1194
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1195
* https://tracker.ceph.com/issues/58742
1196
  xfstests-dev: kcephfs: generic
1197
* https://tracker.ceph.com/issues/61399
1198 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1199 149 Rishabh Dave
1200 148 Rishabh Dave
h3. 12 July 2023
1201
1202
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1203
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1204
1205
* https://tracker.ceph.com/issues/61892
1206
  test_strays.TestStrays.test_snapshot_remove failed
1207
* https://tracker.ceph.com/issues/59348
1208
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1209
* https://tracker.ceph.com/issues/53859
1210
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1211
* https://tracker.ceph.com/issues/59346
1212
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1213
* https://tracker.ceph.com/issues/58742
1214
  xfstests-dev: kcephfs: generic
1215
* https://tracker.ceph.com/issues/59344
1216
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1217
* https://tracker.ceph.com/issues/52438
1218
  qa: ffsb timeout
1219
* https://tracker.ceph.com/issues/57656
1220
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1221
* https://tracker.ceph.com/issues/54460
1222
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1223
* https://tracker.ceph.com/issues/57655
1224
  qa: fs:mixed-clients kernel_untar_build failure
1225
* https://tracker.ceph.com/issues/61182
1226
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1227
* https://tracker.ceph.com/issues/61400
1228
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1229 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1230 146 Patrick Donnelly
  reached max tries: scrub does not complete
1231
1232
h3. 05 July 2023
1233
1234
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1235
1236 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1237 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1238
1239
h3. 27 Jun 2023
1240
1241
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1242 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1243
1244
* https://tracker.ceph.com/issues/59348
1245
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1246
* https://tracker.ceph.com/issues/54460
1247
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1248
* https://tracker.ceph.com/issues/59346
1249
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1250
* https://tracker.ceph.com/issues/59344
1251
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1252
* https://tracker.ceph.com/issues/61399
1253
  libmpich: undefined references to fi_strerror
1254
* https://tracker.ceph.com/issues/50223
1255
  client.xxxx isn't responding to mclientcaps(revoke)
1256 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1257
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1258 142 Venky Shankar
1259
1260
h3. 22 June 2023
1261
1262
* https://tracker.ceph.com/issues/57676
1263
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1264
* https://tracker.ceph.com/issues/54460
1265
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1266
* https://tracker.ceph.com/issues/59344
1267
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1268
* https://tracker.ceph.com/issues/59348
1269
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1270
* https://tracker.ceph.com/issues/61400
1271
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1272
* https://tracker.ceph.com/issues/57655
1273
    qa: fs:mixed-clients kernel_untar_build failure
1274
* https://tracker.ceph.com/issues/61394
1275
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1276
* https://tracker.ceph.com/issues/61762
1277
    qa: wait_for_clean: failed before timeout expired
1278
* https://tracker.ceph.com/issues/61775
1279
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1280
* https://tracker.ceph.com/issues/44565
1281
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1282
* https://tracker.ceph.com/issues/61790
1283
    cephfs client to mds comms remain silent after reconnect
1284
* https://tracker.ceph.com/issues/61791
1285
    snaptest-git-ceph.sh test timed out (job dead)
1286 139 Venky Shankar
1287
1288
h3. 20 June 2023
1289
1290
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1291
1292
* https://tracker.ceph.com/issues/57676
1293
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1294
* https://tracker.ceph.com/issues/54460
1295
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1296 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1297 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1298 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1299 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1300
* https://tracker.ceph.com/issues/59344
1301
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1302
* https://tracker.ceph.com/issues/59348
1303
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1304
* https://tracker.ceph.com/issues/57656
1305
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1306
* https://tracker.ceph.com/issues/61400
1307
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1308
* https://tracker.ceph.com/issues/57655
1309
    qa: fs:mixed-clients kernel_untar_build failure
1310
* https://tracker.ceph.com/issues/44565
1311
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1312
* https://tracker.ceph.com/issues/61737
1313 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1314
1315
h3. 16 June 2023
1316
1317 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1318 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1319 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1320 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1321
1322
1323
* https://tracker.ceph.com/issues/59344
1324
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1325 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1326
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1327 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1328
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1329
* https://tracker.ceph.com/issues/57656
1330
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1331
* https://tracker.ceph.com/issues/54460
1332
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1333 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1334
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1335 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1336
  libmpich: undefined references to fi_strerror
1337
* https://tracker.ceph.com/issues/58945
1338
  xfstests-dev: ceph-fuse: generic 
1339 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1340 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1341
1342
h3. 24 May 2023
1343
1344
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1345
1346
* https://tracker.ceph.com/issues/57676
1347
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1348
* https://tracker.ceph.com/issues/59683
1349
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1350
* https://tracker.ceph.com/issues/61399
1351
    qa: "[Makefile:299: ior] Error 1"
1352
* https://tracker.ceph.com/issues/61265
1353
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1354
* https://tracker.ceph.com/issues/59348
1355
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1356
* https://tracker.ceph.com/issues/59346
1357
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1358
* https://tracker.ceph.com/issues/61400
1359
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1360
* https://tracker.ceph.com/issues/54460
1361
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1362
* https://tracker.ceph.com/issues/51964
1363
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1364
* https://tracker.ceph.com/issues/59344
1365
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1366
* https://tracker.ceph.com/issues/61407
1367
    mds: abort on CInode::verify_dirfrags
1368
* https://tracker.ceph.com/issues/48773
1369
    qa: scrub does not complete
1370
* https://tracker.ceph.com/issues/57655
1371
    qa: fs:mixed-clients kernel_untar_build failure
1372
* https://tracker.ceph.com/issues/61409
1373 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1374
1375
h3. 15 May 2023
1376 130 Venky Shankar
1377 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1378
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1379
1380
* https://tracker.ceph.com/issues/52624
1381
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1382
* https://tracker.ceph.com/issues/54460
1383
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1384
* https://tracker.ceph.com/issues/57676
1385
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1386
* https://tracker.ceph.com/issues/59684 [kclient bug]
1387
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1388
* https://tracker.ceph.com/issues/59348
1389
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1390 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1391
    dbench test results in call trace in dmesg [kclient bug]
1392 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1393 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1394 125 Venky Shankar
1395
 
1396 129 Rishabh Dave
h3. 11 May 2023
1397
1398
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1399
1400
* https://tracker.ceph.com/issues/59684 [kclient bug]
1401
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1402
* https://tracker.ceph.com/issues/59348
1403
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1404
* https://tracker.ceph.com/issues/57655
1405
  qa: fs:mixed-clients kernel_untar_build failure
1406
* https://tracker.ceph.com/issues/57676
1407
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1408
* https://tracker.ceph.com/issues/55805
1409
  error during scrub thrashing reached max tries in 900 secs
1410
* https://tracker.ceph.com/issues/54460
1411
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1412
* https://tracker.ceph.com/issues/57656
1413
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1414
* https://tracker.ceph.com/issues/58220
1415
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1416 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1417
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1418 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1419
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1420 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1421
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1422 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1423
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1424
1425 125 Venky Shankar
h3. 11 May 2023
1426 127 Venky Shankar
1427
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1428 126 Venky Shankar
1429 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1430
 was included in the branch; however, the PR got updated and needs a retest).
1431
1432
* https://tracker.ceph.com/issues/52624
1433
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1434
* https://tracker.ceph.com/issues/54460
1435
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1436
* https://tracker.ceph.com/issues/57676
1437
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1438
* https://tracker.ceph.com/issues/59683
1439
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1440
* https://tracker.ceph.com/issues/59684 [kclient bug]
1441
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1442
* https://tracker.ceph.com/issues/59348
1443 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1444
1445
h3. 09 May 2023
1446
1447
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1448
1449
* https://tracker.ceph.com/issues/52624
1450
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1451
* https://tracker.ceph.com/issues/58340
1452
    mds: fsstress.sh hangs with multimds
1453
* https://tracker.ceph.com/issues/54460
1454
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1455
* https://tracker.ceph.com/issues/57676
1456
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1457
* https://tracker.ceph.com/issues/51964
1458
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1459
* https://tracker.ceph.com/issues/59350
1460
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1461
* https://tracker.ceph.com/issues/59683
1462
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1463
* https://tracker.ceph.com/issues/59684 [kclient bug]
1464
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1465
* https://tracker.ceph.com/issues/59348
1466 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1467
1468
h3. 10 Apr 2023
1469
1470
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1471
1472
* https://tracker.ceph.com/issues/52624
1473
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1474
* https://tracker.ceph.com/issues/58340
1475
    mds: fsstress.sh hangs with multimds
1476
* https://tracker.ceph.com/issues/54460
1477
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1478
* https://tracker.ceph.com/issues/57676
1479
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1480 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1481 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1482 121 Rishabh Dave
1483 120 Rishabh Dave
h3. 31 Mar 2023
1484 122 Rishabh Dave
1485
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1486 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1487
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1488
1489
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1490
1491
* https://tracker.ceph.com/issues/57676
1492
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1493
* https://tracker.ceph.com/issues/54460
1494
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1495
* https://tracker.ceph.com/issues/58220
1496
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1497
* https://tracker.ceph.com/issues/58220#note-9
1498
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1499
* https://tracker.ceph.com/issues/56695
1500
  Command failed (workunit test suites/pjd.sh)
1501
* https://tracker.ceph.com/issues/58564 
1502
  workunit dbench failed with error code 1
1503
* https://tracker.ceph.com/issues/57206
1504
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1505
* https://tracker.ceph.com/issues/57580
1506
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1507
* https://tracker.ceph.com/issues/58940
1508
  ceph osd hit ceph_abort
1509
* https://tracker.ceph.com/issues/55805
1510 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1511
1512
h3. 30 March 2023
1513
1514
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1515
1516
* https://tracker.ceph.com/issues/58938
1517
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1518
* https://tracker.ceph.com/issues/51964
1519
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1520
* https://tracker.ceph.com/issues/58340
1521 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1522
1523 115 Venky Shankar
h3. 29 March 2023
1524 114 Venky Shankar
1525
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1526
1527
* https://tracker.ceph.com/issues/56695
1528
    [RHEL stock] pjd test failures
1529
* https://tracker.ceph.com/issues/57676
1530
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1531
* https://tracker.ceph.com/issues/57087
1532
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1533 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1534
    mds: fsstress.sh hangs with multimds
1535 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1536
    qa: fs:mixed-clients kernel_untar_build failure
1537 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1538
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1539 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1540 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1541
1542
h3. 13 Mar 2023
1543
1544
* https://tracker.ceph.com/issues/56695
1545
    [RHEL stock] pjd test failures
1546
* https://tracker.ceph.com/issues/57676
1547
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1548
* https://tracker.ceph.com/issues/51964
1549
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1550
* https://tracker.ceph.com/issues/54460
1551
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1552
* https://tracker.ceph.com/issues/57656
1553 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1554
1555
h3. 09 Mar 2023
1556
1557
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1558
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1559
1560
* https://tracker.ceph.com/issues/56695
1561
    [RHEL stock] pjd test failures
1562
* https://tracker.ceph.com/issues/57676
1563
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1564
* https://tracker.ceph.com/issues/51964
1565
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1566
* https://tracker.ceph.com/issues/54460
1567
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1568
* https://tracker.ceph.com/issues/58340
1569
    mds: fsstress.sh hangs with multimds
1570
* https://tracker.ceph.com/issues/57087
1571 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1572
1573
h3. 07 Mar 2023
1574
1575
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1576
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1577
1578
* https://tracker.ceph.com/issues/56695
1579
    [RHEL stock] pjd test failures
1580
* https://tracker.ceph.com/issues/57676
1581
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1582
* https://tracker.ceph.com/issues/51964
1583
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1584
* https://tracker.ceph.com/issues/57656
1585
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1586
* https://tracker.ceph.com/issues/57655
1587
    qa: fs:mixed-clients kernel_untar_build failure
1588
* https://tracker.ceph.com/issues/58220
1589
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1590
* https://tracker.ceph.com/issues/54460
1591
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1592
* https://tracker.ceph.com/issues/58934
1593 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1594
1595
h3. 28 Feb 2023
1596
1597
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1598
1599
* https://tracker.ceph.com/issues/56695
1600
    [RHEL stock] pjd test failures
1601
* https://tracker.ceph.com/issues/57676
1602
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1603 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1604 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1605
1606 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1607
1608
h3. 25 Jan 2023
1609
1610
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1611
1612
* https://tracker.ceph.com/issues/52624
1613
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1614
* https://tracker.ceph.com/issues/56695
1615
    [RHEL stock] pjd test failures
1616
* https://tracker.ceph.com/issues/57676
1617
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1618
* https://tracker.ceph.com/issues/56446
1619
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1620
* https://tracker.ceph.com/issues/57206
1621
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1622
* https://tracker.ceph.com/issues/58220
1623
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1624
* https://tracker.ceph.com/issues/58340
1625
  mds: fsstress.sh hangs with multimds
1626
* https://tracker.ceph.com/issues/56011
1627
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1628
* https://tracker.ceph.com/issues/54460
1629 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1630
1631
h3. 30 JAN 2023
1632
1633
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1634
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1635 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1636
1637 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1638
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1639
* https://tracker.ceph.com/issues/56695
1640
  [RHEL stock] pjd test failures
1641
* https://tracker.ceph.com/issues/57676
1642
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1643
* https://tracker.ceph.com/issues/55332
1644
  Failure in snaptest-git-ceph.sh
1645
* https://tracker.ceph.com/issues/51964
1646
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1647
* https://tracker.ceph.com/issues/56446
1648
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1649
* https://tracker.ceph.com/issues/57655 
1650
  qa: fs:mixed-clients kernel_untar_build failure
1651
* https://tracker.ceph.com/issues/54460
1652
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1653 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1654
  mds: fsstress.sh hangs with multimds
1655 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1656 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1657
1658
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1659 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1660
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1661 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1662 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1663
1664
h3. 15 Dec 2022
1665
1666
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1667
1668
* https://tracker.ceph.com/issues/52624
1669
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1670
* https://tracker.ceph.com/issues/56695
1671
    [RHEL stock] pjd test failures
1672
* https://tracker.ceph.com/issues/58219
1673
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1674
* https://tracker.ceph.com/issues/57655
1675
    qa: fs:mixed-clients kernel_untar_build failure
1676
* https://tracker.ceph.com/issues/57676
1677
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1678
* https://tracker.ceph.com/issues/58340
1679 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1680
1681
h3. 08 Dec 2022
1682 99 Venky Shankar
1683 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1684
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1685
1686
(lots of transient git.ceph.com failures)
1687
1688
* https://tracker.ceph.com/issues/52624
1689
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1690
* https://tracker.ceph.com/issues/56695
1691
    [RHEL stock] pjd test failures
1692
* https://tracker.ceph.com/issues/57655
1693
    qa: fs:mixed-clients kernel_untar_build failure
1694
* https://tracker.ceph.com/issues/58219
1695
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1696
* https://tracker.ceph.com/issues/58220
1697
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1698 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1699
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1700 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1701
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1702
* https://tracker.ceph.com/issues/54460
1703
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1704 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1705 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1706
1707
h3. 14 Oct 2022
1708
1709
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1710
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1711
1712
* https://tracker.ceph.com/issues/52624
1713
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1714
* https://tracker.ceph.com/issues/55804
1715
    Command failed (workunit test suites/pjd.sh)
1716
* https://tracker.ceph.com/issues/51964
1717
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1718
* https://tracker.ceph.com/issues/57682
1719
    client: ERROR: test_reconnect_after_blocklisted
1720 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1721 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1722
1723
h3. 10 Oct 2022
1724 92 Rishabh Dave
1725 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1726
1727
reruns
1728
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1729 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1730 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1731 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1732 91 Rishabh Dave
1733
known bugs
1734
* https://tracker.ceph.com/issues/52624
1735
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1736
* https://tracker.ceph.com/issues/50223
1737
  client.xxxx isn't responding to mclientcaps(revoke)
1738
* https://tracker.ceph.com/issues/57299
1739
  qa: test_dump_loads fails with JSONDecodeError
1740
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1741
  qa: fs:mixed-clients kernel_untar_build failure
1742
* https://tracker.ceph.com/issues/57206
1743 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1744
1745
h3. 2022 Sep 29
1746
1747
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1748
1749
* https://tracker.ceph.com/issues/55804
1750
  Command failed (workunit test suites/pjd.sh)
1751
* https://tracker.ceph.com/issues/36593
1752
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1753
* https://tracker.ceph.com/issues/52624
1754
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1755
* https://tracker.ceph.com/issues/51964
1756
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1757
* https://tracker.ceph.com/issues/56632
1758
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1759
* https://tracker.ceph.com/issues/50821
1760 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1761
1762
h3. 2022 Sep 26
1763
1764
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1765
1766
* https://tracker.ceph.com/issues/55804
1767
    qa failure: pjd link tests failed
1768
* https://tracker.ceph.com/issues/57676
1769
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1770
* https://tracker.ceph.com/issues/52624
1771
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1772
* https://tracker.ceph.com/issues/57580
1773
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1774
* https://tracker.ceph.com/issues/48773
1775
    qa: scrub does not complete
1776
* https://tracker.ceph.com/issues/57299
1777
    qa: test_dump_loads fails with JSONDecodeError
1778
* https://tracker.ceph.com/issues/57280
1779
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1780
* https://tracker.ceph.com/issues/57205
1781
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1782
* https://tracker.ceph.com/issues/57656
1783
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1784
* https://tracker.ceph.com/issues/57677
1785
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1786
* https://tracker.ceph.com/issues/57206
1787
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1788
* https://tracker.ceph.com/issues/57446
1789
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1790 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1791
    qa: fs:mixed-clients kernel_untar_build failure
1792 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1793
    client: ERROR: test_reconnect_after_blocklisted
1794 87 Patrick Donnelly
1795
1796
h3. 2022 Sep 22
1797
1798
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1799
1800
* https://tracker.ceph.com/issues/57299
1801
    qa: test_dump_loads fails with JSONDecodeError
1802
* https://tracker.ceph.com/issues/57205
1803
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1804
* https://tracker.ceph.com/issues/52624
1805
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1806
* https://tracker.ceph.com/issues/57580
1807
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1808
* https://tracker.ceph.com/issues/57280
1809
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1810
* https://tracker.ceph.com/issues/48773
1811
    qa: scrub does not complete
1812
* https://tracker.ceph.com/issues/56446
1813
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1814
* https://tracker.ceph.com/issues/57206
1815
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1816
* https://tracker.ceph.com/issues/51267
1817
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1818
1819
NEW:
1820
1821
* https://tracker.ceph.com/issues/57656
1822
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1823
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1824
    qa: fs:mixed-clients kernel_untar_build failure
1825
* https://tracker.ceph.com/issues/57657
1826
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1827
1828
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1829 80 Venky Shankar
1830 79 Venky Shankar
1831
h3. 2022 Sep 16
1832
1833
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1834
1835
* https://tracker.ceph.com/issues/57446
1836
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1837
* https://tracker.ceph.com/issues/57299
1838
    qa: test_dump_loads fails with JSONDecodeError
1839
* https://tracker.ceph.com/issues/50223
1840
    client.xxxx isn't responding to mclientcaps(revoke)
1841
* https://tracker.ceph.com/issues/52624
1842
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1843
* https://tracker.ceph.com/issues/57205
1844
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1845
* https://tracker.ceph.com/issues/57280
1846
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1847
* https://tracker.ceph.com/issues/51282
1848
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
1849
* https://tracker.ceph.com/issues/48203
1850
  https://tracker.ceph.com/issues/36593
1851
    qa: quota failure
1852
    qa: quota failure caused by clients stepping on each other
1853
* https://tracker.ceph.com/issues/57580
1854 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1855
1856 76 Rishabh Dave
1857
h3. 2022 Aug 26
1858
1859
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1860
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1861
1862
* https://tracker.ceph.com/issues/57206
1863
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1864
* https://tracker.ceph.com/issues/56632
1865
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1866
* https://tracker.ceph.com/issues/56446
1867
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1868
* https://tracker.ceph.com/issues/51964
1869
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1870
* https://tracker.ceph.com/issues/53859
1871
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1872
1873
* https://tracker.ceph.com/issues/54460
1874
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1875
* https://tracker.ceph.com/issues/54462
1876
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1879
* https://tracker.ceph.com/issues/36593
1880
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1881
1882
* https://tracker.ceph.com/issues/52624
1883
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1884
* https://tracker.ceph.com/issues/55804
1885
  Command failed (workunit test suites/pjd.sh)
1886
* https://tracker.ceph.com/issues/50223
1887
  client.xxxx isn't responding to mclientcaps(revoke)
1888 75 Venky Shankar
1889
1890
h3. 2022 Aug 22
1891
1892
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1893
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1894
1895
* https://tracker.ceph.com/issues/52624
1896
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1897
* https://tracker.ceph.com/issues/56446
1898
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1899
* https://tracker.ceph.com/issues/55804
1900
    Command failed (workunit test suites/pjd.sh)
1901
* https://tracker.ceph.com/issues/51278
1902
    mds: "FAILED ceph_assert(!segments.empty())"
1903
* https://tracker.ceph.com/issues/54460
1904
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1905
* https://tracker.ceph.com/issues/57205
1906
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1907
* https://tracker.ceph.com/issues/57206
1908
    ceph_test_libcephfs_reclaim crashes during test
1909
* https://tracker.ceph.com/issues/53859
1910
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1911
* https://tracker.ceph.com/issues/50223
1912 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1913
1914
h3. 2022 Aug 12
1915
1916
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1917
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1918
1919
* https://tracker.ceph.com/issues/52624
1920
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1921
* https://tracker.ceph.com/issues/56446
1922
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1923
* https://tracker.ceph.com/issues/51964
1924
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1925
* https://tracker.ceph.com/issues/55804
1926
    Command failed (workunit test suites/pjd.sh)
1927
* https://tracker.ceph.com/issues/50223
1928
    client.xxxx isn't responding to mclientcaps(revoke)
1929
* https://tracker.ceph.com/issues/50821
1930 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1931 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1932 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1933
1934
h3. 2022 Aug 04
1935
1936
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1937
1938 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1939 68 Rishabh Dave
1940
h3. 2022 Jul 25
1941
1942
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1943
1944 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1945
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1946 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1947
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1948
1949
* https://tracker.ceph.com/issues/55804
1950
  Command failed (workunit test suites/pjd.sh)
1951
* https://tracker.ceph.com/issues/50223
1952
  client.xxxx isn't responding to mclientcaps(revoke)
1953
1954
* https://tracker.ceph.com/issues/54460
1955
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1956 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1957 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1958 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1959 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1960
1961
h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"


h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
    osd: deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56632
    Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
    workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
    Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)


h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On the 1st re-run, some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On the 2nd re-run, only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

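For context, this is roughly how a run with a job filter like the one above gets scheduled with teuthology-suite. This is a minimal sketch only; the suite, branch, machine type, and priority values below are placeholders rather than the exact arguments used for this run:

<pre>
# Illustrative only: schedule the fs suite against a wip branch while
# excluding any jobs whose description matches "rhel" (useful when RHEL
# nodes or repos are broken). All values are placeholders.
teuthology-suite \
  --suite fs \
  --ceph wip-vshankar-testing-20220627-100931 \
  --machine-type smithi \
  --priority 100 \
  --filter-out rhel
</pre>
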
* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)


h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())


h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure


h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)


h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs


h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

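As a reminder of what "rerun with (drop) PR" means in practice: the integration branch is rebuilt without the suspect pull requests and the suite is scheduled again against the new build. A rough sketch only, assuming a GitHub remote named `ceph` and a ceph-ci remote named `ci`; the branch names and PR number below are placeholders:

<pre>
# Illustrative only: rebuild a wip testing branch without the dropped PRs,
# then reschedule the suite against it. Remote names, branch names and the
# PR number are placeholders.
git fetch ceph
git checkout -b wip-vshankar-testing-rerun ceph/master
# merge only the PRs being kept (GitHub exposes them as pull/<N>/head)
git fetch ceph pull/12345/head && git merge --no-ff FETCH_HEAD
# push to ceph-ci so packages get built, then schedule the rerun
git push ci wip-vshankar-testing-rerun
teuthology-suite --suite fs --ceph wip-vshankar-testing-rerun --machine-type smithi
</pre>
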
* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout


h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)


h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

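Since several of these entries point at the "`damage ls` output" referenced in the MDS log, here is a small reference sketch for pulling that information; `mds.a` stands in for whichever daemon the warning names, and the commands are illustrative rather than a recommendation for these specific failures:

<pre>
# Illustrative only: inspect scrub damage reported by an MDS. "mds.a" is a
# placeholder for the daemon named in the cluster log warning.
ceph tell mds.a damage ls                  # list damage table entries as JSON
ceph tell mds.a scrub start / recursive    # re-scrub the tree from the root
ceph tell mds.a damage rm <id>             # drop an entry once it is resolved
</pre>
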
h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")


h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+


h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete


h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete


h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)


h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError


h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error


h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete


h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"


h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing


h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing