h1. <code>main</code> branch

h3. 2024-04-02

https://tracker.ceph.com/issues/65215

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

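With the check restored, the cluster log is the first thing to look at when triaging these jobs. A minimal sketch of pulling out the offending lines (the script name and the two ignore patterns are illustrative only; the real ignorelist is whatever the qa suite's log-ignorelist carries):

<pre><code class="bash">
#!/usr/bin/env bash
# Summarize cluster-log warnings/errors that the qa log check would flag.
# Usage: ./scan-cluster-log.sh <cluster log file>
LOG="${1:?usage: $0 <cluster log file>}"

# The check fails a job when "cluster [WRN] ..." / "cluster [ERR] ..." lines
# appear and are not matched by the suite's log-ignorelist entries.
grep -E "cluster \[(WRN|ERR)\]" "$LOG" \
  | grep -vE "POOL_APP_NOT_ENABLED|OSD bench result" \
  | sed -e 's/^.*cluster \[/[/' \
  | sort | uniq -c | sort -rn | head -20
</code></pre>
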
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to
  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denials related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
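
The symptom is easy to reproduce by hand on a test node: request the unmount, then watch whether the mountpoint actually disappears from the mount table. A rough sketch of that check (the mountpoint path and timeout below are illustrative, not what the suite hard-codes):

<pre><code class="bash">
#!/usr/bin/env bash
# Request a ceph-fuse unmount and poll /proc/mounts until it detaches or we give up.
MNT=/mnt/cephfs          # illustrative mountpoint
DEADLINE=$((SECONDS + 300))

fusermount -u "$MNT" || echo "fusermount -u returned $?"

while grep -q " $MNT fuse.ceph-fuse " /proc/mounts; do
    if (( SECONDS >= DEADLINE )); then
        echo "ceph-fuse still mounted after 300s (what teuthology reports as MaxWhileTries)"
        exit 1
    fi
    sleep 5
done
echo "unmounted"
</code></pre>

In the failed jobs a loop like the one above never terminates on its own; the mount only goes away once the daemons are torn down during cleanup.
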
h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* from the last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(never mind the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin), caused by fragmentation from config changes (see the sketch below).
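
For context, export pinning is driven by the @ceph.dir.pin@ vxattr, and more aggressive dirfrag splitting means one pinned directory can map to several dirfrags/subtrees, which is the kind of count such config changes can perturb. A rough illustration of the mechanism only; the mountpoint, rank, and threshold are illustrative values and not what the qa task actually sets:

<pre><code class="bash">
# Pin a directory's subtree to MDS rank 1; ceph.dir.pin is the standard CephFS vxattr.
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/pinned_dir
getfattr -n ceph.dir.pin /mnt/cephfs/pinned_dir

# A lower split threshold makes directories fragment sooner, so a single pinned
# directory can turn into several dirfrags/subtrees on the target rank.
ceph config set mds mds_bal_split_size 100

# Subtree placement can then be inspected with the MDS admin command, e.g.:
#   ceph tell mds.<name> get subtrees
</code></pre>
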
Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures but, according to Adam King, these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 JULY 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/

There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

One more extra run to check if blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
  test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 JAN 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

reruns
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458

known bugs
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on rhel

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15
2001
2002
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2003
2004
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2005
2006
* https://tracker.ceph.com/issues/53859
2007
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2008
* https://tracker.ceph.com/issues/55804
2009
  Command failed (workunit test suites/pjd.sh)
2010
* https://tracker.ceph.com/issues/50223
2011
  client.xxxx isn't responding to mclientcaps(revoke)
2012
* https://tracker.ceph.com/issues/50222
2013
  osd: deep-scrub : stat mismatch
2014
2015
* https://tracker.ceph.com/issues/56632
2016
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2017
* https://tracker.ceph.com/issues/56634
2018
  workunit test fs/snaps/snaptest-intodir.sh
2019
* https://tracker.ceph.com/issues/56644
2020
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2021
2022 61 Rishabh Dave
2023
2024
h3. 2022 July 05
2025 62 Rishabh Dave
2026 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2027
2028
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2029
2030
On 2nd re-run only a few jobs failed -
2031 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2032
2033
2034
* https://tracker.ceph.com/issues/56446
2035
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2036
* https://tracker.ceph.com/issues/55804
2037
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2038
2039
* https://tracker.ceph.com/issues/56445
2040 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2041
* https://tracker.ceph.com/issues/51267
2042
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2043 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2044
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2045 61 Rishabh Dave
2046 58 Venky Shankar
2047
2048
h3. 2022 July 04
2049
2050
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2051
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2052
2053
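For context, a hedged sketch of how a run like this is typically scheduled with teuthology-suite while excluding the broken rhel jobs; the branch name and option values below are illustrative, not the exact command used for this run:

<pre>
# Illustrative only: schedule the fs suite against a wip branch on smithi with
# the testing kernel, filtering out the rhel facets that were known to be broken.
teuthology-suite --suite fs \
    --ceph wip-vshankar-testing-20220627-100931 \
    --machine-type smithi \
    --kernel testing \
    --filter-out rhel
</pre>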
* https://tracker.ceph.com/issues/56445
2054 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2055
* https://tracker.ceph.com/issues/56446
2056
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2057
* https://tracker.ceph.com/issues/51964
2058 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2059 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2060 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2061
2062
h3. 2022 June 20
2063
2064
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2065
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2066
2067
* https://tracker.ceph.com/issues/52624
2068
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2069
* https://tracker.ceph.com/issues/55804
2070
    qa failure: pjd link tests failed
2071
* https://tracker.ceph.com/issues/54108
2072
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2073
* https://tracker.ceph.com/issues/55332
2074 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2075
2076
h3. 2022 June 13
2077
2078
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2079
2080
* https://tracker.ceph.com/issues/56024
2081
    cephadm: removes ceph.conf during qa run causing command failure
2082
* https://tracker.ceph.com/issues/48773
2083
    qa: scrub does not complete
2084
* https://tracker.ceph.com/issues/56012
2085
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2086 55 Venky Shankar
2087 54 Venky Shankar
2088
h3. 2022 Jun 13
2089
2090
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2091
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2092
2093
* https://tracker.ceph.com/issues/52624
2094
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2095
* https://tracker.ceph.com/issues/51964
2096
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2097
* https://tracker.ceph.com/issues/53859
2098
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2099
* https://tracker.ceph.com/issues/55804
2100
    qa failure: pjd link tests failed
2101
* https://tracker.ceph.com/issues/56003
2102
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2103
* https://tracker.ceph.com/issues/56011
2104
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2105
* https://tracker.ceph.com/issues/56012
2106 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2107
2108
h3. 2022 Jun 07
2109
2110
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2111
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2112
2113
* https://tracker.ceph.com/issues/52624
2114
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2115
* https://tracker.ceph.com/issues/50223
2116
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2117
* https://tracker.ceph.com/issues/50224
2118 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2119
2120
h3. 2022 May 12
2121 52 Venky Shankar
2122 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2123
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2124
2125
* https://tracker.ceph.com/issues/52624
2126
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2127
* https://tracker.ceph.com/issues/50223
2128
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2129
* https://tracker.ceph.com/issues/55332
2130
    Failure in snaptest-git-ceph.sh
2131
* https://tracker.ceph.com/issues/53859
2132 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2133 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2134
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2135 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2136 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2137
2138 50 Venky Shankar
h3. 2022 May 04
2139
2140
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2141 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2142
2143
* https://tracker.ceph.com/issues/52624
2144
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2145
* https://tracker.ceph.com/issues/50223
2146
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2147
* https://tracker.ceph.com/issues/55332
2148
    Failure in snaptest-git-ceph.sh
2149
* https://tracker.ceph.com/issues/53859
2150
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2151
* https://tracker.ceph.com/issues/55516
2152
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2153
* https://tracker.ceph.com/issues/55537
2154
    mds: crash during fs:upgrade test
2155
* https://tracker.ceph.com/issues/55538
2156 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2157
2158
h3. 2022 Apr 25
2159
2160
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2161
2162
* https://tracker.ceph.com/issues/52624
2163
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2164
* https://tracker.ceph.com/issues/50223
2165
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2166
* https://tracker.ceph.com/issues/55258
2167
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2168
* https://tracker.ceph.com/issues/55377
2169 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2170
2171
h3. 2022 Apr 14
2172
2173
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2174
2175
* https://tracker.ceph.com/issues/52624
2176
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2177
* https://tracker.ceph.com/issues/50223
2178
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2179
* https://tracker.ceph.com/issues/52438
2180
    qa: ffsb timeout
2181
* https://tracker.ceph.com/issues/55170
2182
    mds: crash during rejoin (CDir::fetch_keys)
2183
* https://tracker.ceph.com/issues/55331
2184
    pjd failure
2185
* https://tracker.ceph.com/issues/48773
2186
    qa: scrub does not complete
2187
* https://tracker.ceph.com/issues/55332
2188
    Failure in snaptest-git-ceph.sh
2189
* https://tracker.ceph.com/issues/55258
2190 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2191
2192 46 Venky Shankar
h3. 2022 Apr 11
2193 45 Venky Shankar
2194
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2195
2196
* https://tracker.ceph.com/issues/48773
2197
    qa: scrub does not complete
2198
* https://tracker.ceph.com/issues/52624
2199
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2200
* https://tracker.ceph.com/issues/52438
2201
    qa: ffsb timeout
2202
* https://tracker.ceph.com/issues/48680
2203
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2204
* https://tracker.ceph.com/issues/55236
2205
    qa: fs/snaps tests fails with "hit max job timeout"
2206
* https://tracker.ceph.com/issues/54108
2207
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2208
* https://tracker.ceph.com/issues/54971
2209
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2210
* https://tracker.ceph.com/issues/50223
2211
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2212
* https://tracker.ceph.com/issues/55258
2213 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2214 42 Venky Shankar
2215 43 Venky Shankar
h3. 2022 Mar 21
2216
2217
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2218
2219
The run didn't go well, with lots of failures. Debugging by dropping PRs and re-running against the master branch; only merging unrelated PRs that pass tests.
2220
2221
2222 42 Venky Shankar
h3. 2022 Mar 08
2223
2224
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2225
2226
rerun with
2227
- (drop) https://github.com/ceph/ceph/pull/44679
2228
- (drop) https://github.com/ceph/ceph/pull/44958
2229
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2230
2231
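The "(drop)" notes mean the rerun was scheduled against an integration branch rebuilt without those PRs. A rough, illustrative sketch of that kind of branch assembly (the base ref and the PR number placeholder below are not the actual rebuild commands):

<pre>
# Illustrative: reassemble a wip testing branch, leaving out the PRs marked "(drop)".
git checkout -b wip-vshankar-testing-20220304-132102 origin/master
# merge only the PRs that remain in the batch; NNNNN is a placeholder,
# and 44679/44958 are simply not merged into this rebuild
git fetch https://github.com/ceph/ceph refs/pull/NNNNN/head
git merge --no-ff FETCH_HEAD
</pre>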
* https://tracker.ceph.com/issues/54419 (new)
2232
    `ceph orch upgrade start` seems to never reach completion
2233
* https://tracker.ceph.com/issues/51964
2234
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2235
* https://tracker.ceph.com/issues/52624
2236
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2237
* https://tracker.ceph.com/issues/50223
2238
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2239
* https://tracker.ceph.com/issues/52438
2240
    qa: ffsb timeout
2241
* https://tracker.ceph.com/issues/50821
2242
    qa: untar_snap_rm failure during mds thrashing
2243 41 Venky Shankar
2244
2245
h3. 2022 Feb 09
2246
2247
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2248
2249
rerun with
2250
- (drop) https://github.com/ceph/ceph/pull/37938
2251
- (drop) https://github.com/ceph/ceph/pull/44335
2252
- (drop) https://github.com/ceph/ceph/pull/44491
2253
- (drop) https://github.com/ceph/ceph/pull/44501
2254
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2255
2256
* https://tracker.ceph.com/issues/51964
2257
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2258
* https://tracker.ceph.com/issues/54066
2259
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2260
* https://tracker.ceph.com/issues/48773
2261
    qa: scrub does not complete
2262
* https://tracker.ceph.com/issues/52624
2263
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2264
* https://tracker.ceph.com/issues/50223
2265
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2266
* https://tracker.ceph.com/issues/52438
2267 40 Patrick Donnelly
    qa: ffsb timeout
2268
2269
h3. 2022 Feb 01
2270
2271
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2272
2273
* https://tracker.ceph.com/issues/54107
2274
    kclient: hang during umount
2275
* https://tracker.ceph.com/issues/54106
2276
    kclient: hang during workunit cleanup
2277
* https://tracker.ceph.com/issues/54108
2278
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2279
* https://tracker.ceph.com/issues/48773
2280
    qa: scrub does not complete
2281
* https://tracker.ceph.com/issues/52438
2282
    qa: ffsb timeout
2283 36 Venky Shankar
2284
2285
h3. 2022 Jan 13
2286 39 Venky Shankar
2287 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2288 38 Venky Shankar
2289
rerun with:
2290 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2291
- (drop) https://github.com/ceph/ceph/pull/43184
2292
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2293
2294
* https://tracker.ceph.com/issues/50223
2295
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2296
* https://tracker.ceph.com/issues/51282
2297
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2298
* https://tracker.ceph.com/issues/48773
2299
    qa: scrub does not complete
2300
* https://tracker.ceph.com/issues/52624
2301
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2302
* https://tracker.ceph.com/issues/53859
2303 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2304
2305
h3. 2022 Jan 03
2306
2307
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2308
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2309
2310
* https://tracker.ceph.com/issues/50223
2311
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2312
* https://tracker.ceph.com/issues/51964
2313
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2314
* https://tracker.ceph.com/issues/51267
2315
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2316
* https://tracker.ceph.com/issues/51282
2317
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2318
* https://tracker.ceph.com/issues/50821
2319
    qa: untar_snap_rm failure during mds thrashing
2320 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2321
    mds: "FAILED ceph_assert(!segments.empty())"
2322
* https://tracker.ceph.com/issues/52279
2323 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2324 33 Patrick Donnelly
2325
2326
h3. 2021 Dec 22
2327
2328
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2329
2330
* https://tracker.ceph.com/issues/52624
2331
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2332
* https://tracker.ceph.com/issues/50223
2333
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2334
* https://tracker.ceph.com/issues/52279
2335
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2336
* https://tracker.ceph.com/issues/50224
2337
    qa: test_mirroring_init_failure_with_recovery failure
2338
* https://tracker.ceph.com/issues/48773
2339
    qa: scrub does not complete
2340 32 Venky Shankar
2341
2342
h3. 2021 Nov 30
2343
2344
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2345
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2346
2347
* https://tracker.ceph.com/issues/53436
2348
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2349
* https://tracker.ceph.com/issues/51964
2350
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2351
* https://tracker.ceph.com/issues/48812
2352
    qa: test_scrub_pause_and_resume_with_abort failure
2353
* https://tracker.ceph.com/issues/51076
2354
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2355
* https://tracker.ceph.com/issues/50223
2356
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2357
* https://tracker.ceph.com/issues/52624
2358
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2359
* https://tracker.ceph.com/issues/50250
2360
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2361 31 Patrick Donnelly
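For the recurring "Scrub error on inode ... see `damage ls` output for details" entries like the one above, the damage table and a re-scrub are normally inspected via `ceph tell` to the MDS. An indicative sketch, where the daemon name mds.a and the scrub path are placeholders:

<pre>
# Indicative commands for investigating a scrub/damage entry (daemon name is a placeholder).
ceph tell mds.a damage ls                        # dump the recorded damage entries
ceph tell mds.a scrub start / recursive,repair   # re-scrub from the root and attempt repair
ceph tell mds.a scrub status                     # check progress of the running scrub
</pre>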
2362
2363
h3. 2021 November 9
2364
2365
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2366
2367
* https://tracker.ceph.com/issues/53214
2368
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2369
* https://tracker.ceph.com/issues/48773
2370
    qa: scrub does not complete
2371
* https://tracker.ceph.com/issues/50223
2372
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2373
* https://tracker.ceph.com/issues/51282
2374
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2375
* https://tracker.ceph.com/issues/52624
2376
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2377
* https://tracker.ceph.com/issues/53216
2378
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2379
* https://tracker.ceph.com/issues/50250
2380
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2381
2382 30 Patrick Donnelly
2383
2384
h3. 2021 November 03
2385
2386
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2387
2388
* https://tracker.ceph.com/issues/51964
2389
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2390
* https://tracker.ceph.com/issues/51282
2391
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2392
* https://tracker.ceph.com/issues/52436
2393
    fs/ceph: "corrupt mdsmap"
2394
* https://tracker.ceph.com/issues/53074
2395
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2396
* https://tracker.ceph.com/issues/53150
2397
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2398
* https://tracker.ceph.com/issues/53155
2399
    MDSMonitor: assertion during upgrade to v16.2.5+
2400 29 Patrick Donnelly
2401
2402
h3. 2021 October 26
2403
2404
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2405
2406
* https://tracker.ceph.com/issues/53074
2407
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2408
* https://tracker.ceph.com/issues/52997
2409
    testing: hanging umount
2410
* https://tracker.ceph.com/issues/50824
2411
    qa: snaptest-git-ceph bus error
2412
* https://tracker.ceph.com/issues/52436
2413
    fs/ceph: "corrupt mdsmap"
2414
* https://tracker.ceph.com/issues/48773
2415
    qa: scrub does not complete
2416
* https://tracker.ceph.com/issues/53082
2417
    ceph-fuse: segmentation fault in Client::handle_mds_map
2418
* https://tracker.ceph.com/issues/50223
2419
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2420
* https://tracker.ceph.com/issues/52624
2421
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2422
* https://tracker.ceph.com/issues/50224
2423
    qa: test_mirroring_init_failure_with_recovery failure
2424
* https://tracker.ceph.com/issues/50821
2425
    qa: untar_snap_rm failure during mds thrashing
2426
* https://tracker.ceph.com/issues/50250
2427
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2428
2429 27 Patrick Donnelly
2430
2431 28 Patrick Donnelly
h3. 2021 October 19
2432 27 Patrick Donnelly
2433
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2434
2435
* https://tracker.ceph.com/issues/52995
2436
    qa: test_standby_count_wanted failure
2437
* https://tracker.ceph.com/issues/52948
2438
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2439
* https://tracker.ceph.com/issues/52996
2440
    qa: test_perf_counters via test_openfiletable
2441
* https://tracker.ceph.com/issues/48772
2442
    qa: pjd: not ok 9, 44, 80
2443
* https://tracker.ceph.com/issues/52997
2444
    testing: hanging umount
2445
* https://tracker.ceph.com/issues/50250
2446
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2447
* https://tracker.ceph.com/issues/52624
2448
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2449
* https://tracker.ceph.com/issues/50223
2450
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2451
* https://tracker.ceph.com/issues/50821
2452
    qa: untar_snap_rm failure during mds thrashing
2453
* https://tracker.ceph.com/issues/48773
2454
    qa: scrub does not complete
2455 26 Patrick Donnelly
2456
2457
h3. 2021 October 12
2458
2459
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2460
2461
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2462
2463
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2464
2465
2466
* https://tracker.ceph.com/issues/51282
2467
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2468
* https://tracker.ceph.com/issues/52948
2469
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2470
* https://tracker.ceph.com/issues/48773
2471
    qa: scrub does not complete
2472
* https://tracker.ceph.com/issues/50224
2473
    qa: test_mirroring_init_failure_with_recovery failure
2474
* https://tracker.ceph.com/issues/52949
2475
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2476 25 Patrick Donnelly
2477 23 Patrick Donnelly
2478 24 Patrick Donnelly
h3. 2021 October 02
2479
2480
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2481
2482
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2483
2484
test_simple failures caused by PR in this set.
2485
2486
A few reruns because of QA infra noise.
2487
2488
* https://tracker.ceph.com/issues/52822
2489
    qa: failed pacific install on fs:upgrade
2490
* https://tracker.ceph.com/issues/52624
2491
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2492
* https://tracker.ceph.com/issues/50223
2493
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2494
* https://tracker.ceph.com/issues/48773
2495
    qa: scrub does not complete
2496
2497
2498 23 Patrick Donnelly
h3. 2021 September 20
2499
2500
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2501
2502
* https://tracker.ceph.com/issues/52677
2503
    qa: test_simple failure
2504
* https://tracker.ceph.com/issues/51279
2505
    kclient hangs on umount (testing branch)
2506
* https://tracker.ceph.com/issues/50223
2507
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2508
* https://tracker.ceph.com/issues/50250
2509
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2510
* https://tracker.ceph.com/issues/52624
2511
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2512
* https://tracker.ceph.com/issues/52438
2513
    qa: ffsb timeout
2514 22 Patrick Donnelly
2515
2516
h3. 2021 September 10
2517
2518
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2519
2520
* https://tracker.ceph.com/issues/50223
2521
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2522
* https://tracker.ceph.com/issues/50250
2523
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2524
* https://tracker.ceph.com/issues/52624
2525
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2526
* https://tracker.ceph.com/issues/52625
2527
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2528
* https://tracker.ceph.com/issues/52439
2529
    qa: acls does not compile on centos stream
2530
* https://tracker.ceph.com/issues/50821
2531
    qa: untar_snap_rm failure during mds thrashing
2532
* https://tracker.ceph.com/issues/48773
2533
    qa: scrub does not complete
2534
* https://tracker.ceph.com/issues/52626
2535
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2536
* https://tracker.ceph.com/issues/51279
2537
    kclient hangs on umount (testing branch)
2538 21 Patrick Donnelly
2539
2540
h3. 2021 August 27
2541
2542
Several jobs died because of device failures.
2543
2544
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2545
2546
* https://tracker.ceph.com/issues/52430
2547
    mds: fast async create client mount breaks racy test
2548
* https://tracker.ceph.com/issues/52436
2549
    fs/ceph: "corrupt mdsmap"
2550
* https://tracker.ceph.com/issues/52437
2551
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2552
* https://tracker.ceph.com/issues/51282
2553
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2554
* https://tracker.ceph.com/issues/52438
2555
    qa: ffsb timeout
2556
* https://tracker.ceph.com/issues/52439
2557
    qa: acls does not compile on centos stream
2558 20 Patrick Donnelly
2559
2560
h3. 2021 July 30
2561
2562
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2563
2564
* https://tracker.ceph.com/issues/50250
2565
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2566
* https://tracker.ceph.com/issues/51282
2567
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2568
* https://tracker.ceph.com/issues/48773
2569
    qa: scrub does not complete
2570
* https://tracker.ceph.com/issues/51975
2571
    pybind/mgr/stats: KeyError
2572 19 Patrick Donnelly
2573
2574
h3. 2021 July 28
2575
2576
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2577
2578
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2579
2580
* https://tracker.ceph.com/issues/51905
2581
    qa: "error reading sessionmap 'mds1_sessionmap'"
2582
* https://tracker.ceph.com/issues/48773
2583
    qa: scrub does not complete
2584
* https://tracker.ceph.com/issues/50250
2585
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2586
* https://tracker.ceph.com/issues/51267
2587
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2588
* https://tracker.ceph.com/issues/51279
2589
    kclient hangs on umount (testing branch)
2590 18 Patrick Donnelly
2591
2592
h3. 2021 July 16
2593
2594
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2595
2596
* https://tracker.ceph.com/issues/48773
2597
    qa: scrub does not complete
2598
* https://tracker.ceph.com/issues/48772
2599
    qa: pjd: not ok 9, 44, 80
2600
* https://tracker.ceph.com/issues/45434
2601
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2602
* https://tracker.ceph.com/issues/51279
2603
    kclient hangs on umount (testing branch)
2604
* https://tracker.ceph.com/issues/50824
2605
    qa: snaptest-git-ceph bus error
2606 17 Patrick Donnelly
2607
2608
h3. 2021 July 04
2609
2610
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2611
2612
* https://tracker.ceph.com/issues/48773
2613
    qa: scrub does not complete
2614
* https://tracker.ceph.com/issues/39150
2615
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2616
* https://tracker.ceph.com/issues/45434
2617
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2618
* https://tracker.ceph.com/issues/51282
2619
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2620
* https://tracker.ceph.com/issues/48771
2621
    qa: iogen: workload fails to cause balancing
2622
* https://tracker.ceph.com/issues/51279
2623
    kclient hangs on umount (testing branch)
2624
* https://tracker.ceph.com/issues/50250
2625
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2626 16 Patrick Donnelly
2627
2628
h3. 2021 July 01
2629
2630
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2631
2632
* https://tracker.ceph.com/issues/51197
2633
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2634
* https://tracker.ceph.com/issues/50866
2635
    osd: stat mismatch on objects
2636
* https://tracker.ceph.com/issues/48773
2637
    qa: scrub does not complete
2638 15 Patrick Donnelly
2639
2640
h3. 2021 June 26
2641
2642
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2643
2644
* https://tracker.ceph.com/issues/51183
2645
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2646
* https://tracker.ceph.com/issues/51410
2647
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2648
* https://tracker.ceph.com/issues/48773
2649
    qa: scrub does not complete
2650
* https://tracker.ceph.com/issues/51282
2651
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2652
* https://tracker.ceph.com/issues/51169
2653
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2654
* https://tracker.ceph.com/issues/48772
2655
    qa: pjd: not ok 9, 44, 80
2656 14 Patrick Donnelly
2657
2658
h3. 2021 June 21
2659
2660
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2661
2662
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2663
2664
* https://tracker.ceph.com/issues/51282
2665
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2666
* https://tracker.ceph.com/issues/51183
2667
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2668
* https://tracker.ceph.com/issues/48773
2669
    qa: scrub does not complete
2670
* https://tracker.ceph.com/issues/48771
2671
    qa: iogen: workload fails to cause balancing
2672
* https://tracker.ceph.com/issues/51169
2673
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2674
* https://tracker.ceph.com/issues/50495
2675
    libcephfs: shutdown race fails with status 141
2676
* https://tracker.ceph.com/issues/45434
2677
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2678
* https://tracker.ceph.com/issues/50824
2679
    qa: snaptest-git-ceph bus error
2680
* https://tracker.ceph.com/issues/50223
2681
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2682 13 Patrick Donnelly
2683
2684
h3. 2021 June 16
2685
2686
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2687
2688
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2689
2690
* https://tracker.ceph.com/issues/45434
2691
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2692
* https://tracker.ceph.com/issues/51169
2693
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2694
* https://tracker.ceph.com/issues/43216
2695
    MDSMonitor: removes MDS coming out of quorum election
2696
* https://tracker.ceph.com/issues/51278
2697
    mds: "FAILED ceph_assert(!segments.empty())"
2698
* https://tracker.ceph.com/issues/51279
2699
    kclient hangs on umount (testing branch)
2700
* https://tracker.ceph.com/issues/51280
2701
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2702
* https://tracker.ceph.com/issues/51183
2703
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2704
* https://tracker.ceph.com/issues/51281
2705
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2706
* https://tracker.ceph.com/issues/48773
2707
    qa: scrub does not complete
2708
* https://tracker.ceph.com/issues/51076
2709
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2710
* https://tracker.ceph.com/issues/51228
2711
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2712
* https://tracker.ceph.com/issues/51282
2713
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2714 12 Patrick Donnelly
2715
2716
h3. 2021 June 14
2717
2718
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2719
2720
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2721
2722
* https://tracker.ceph.com/issues/51169
2723
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2724
* https://tracker.ceph.com/issues/51228
2725
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2726
* https://tracker.ceph.com/issues/48773
2727
    qa: scrub does not complete
2728
* https://tracker.ceph.com/issues/51183
2729
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2730
* https://tracker.ceph.com/issues/45434
2731
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2732
* https://tracker.ceph.com/issues/51182
2733
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2734
* https://tracker.ceph.com/issues/51229
2735
    qa: test_multi_snap_schedule list difference failure
2736
* https://tracker.ceph.com/issues/50821
2737
    qa: untar_snap_rm failure during mds thrashing
2738 11 Patrick Donnelly
2739
2740
h3. 2021 June 13
2741
2742
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2743
2744
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2745
2746
* https://tracker.ceph.com/issues/51169
2747
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2748
* https://tracker.ceph.com/issues/48773
2749
    qa: scrub does not complete
2750
* https://tracker.ceph.com/issues/51182
2751
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2752
* https://tracker.ceph.com/issues/51183
2753
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2754
* https://tracker.ceph.com/issues/51197
2755
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2756
* https://tracker.ceph.com/issues/45434
2757 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2758
2759
h3. 2021 June 11
2760
2761
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2762
2763
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2764
2765
* https://tracker.ceph.com/issues/51169
2766
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2767
* https://tracker.ceph.com/issues/45434
2768
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2769
* https://tracker.ceph.com/issues/48771
2770
    qa: iogen: workload fails to cause balancing
2771
* https://tracker.ceph.com/issues/43216
2772
    MDSMonitor: removes MDS coming out of quorum election
2773
* https://tracker.ceph.com/issues/51182
2774
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2775
* https://tracker.ceph.com/issues/50223
2776
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2777
* https://tracker.ceph.com/issues/48773
2778
    qa: scrub does not complete
2779
* https://tracker.ceph.com/issues/51183
2780
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2781
* https://tracker.ceph.com/issues/51184
2782
    qa: fs:bugs does not specify distro
2783 9 Patrick Donnelly
2784
2785
h3. 2021 June 03
2786
2787
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2788
2789
* https://tracker.ceph.com/issues/45434
2790
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2791
* https://tracker.ceph.com/issues/50016
2792
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2793
* https://tracker.ceph.com/issues/50821
2794
    qa: untar_snap_rm failure during mds thrashing
2795
* https://tracker.ceph.com/issues/50622 (regression)
2796
    msg: active_connections regression
2797
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2798
    qa: failed umount in test_volumes
2799
* https://tracker.ceph.com/issues/48773
2800
    qa: scrub does not complete
2801
* https://tracker.ceph.com/issues/43216
2802
    MDSMonitor: removes MDS coming out of quorum election
2803 7 Patrick Donnelly
2804
2805 8 Patrick Donnelly
h3. 2021 May 18
2806
2807
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2808
2809
A regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2810
looked better. Some odd new noise in the rerun relating to packaging and "No
2811
module named 'tasks.ceph'".
2812
2813
* https://tracker.ceph.com/issues/50824
2814
    qa: snaptest-git-ceph bus error
2815
* https://tracker.ceph.com/issues/50622 (regression)
2816
    msg: active_connections regression
2817
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2818
    qa: failed umount in test_volumes
2819
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2820
    qa: quota failure
2821
2822
2823 7 Patrick Donnelly
h3. 2021 May 18
2824
2825
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2826
2827
* https://tracker.ceph.com/issues/50821
2828
    qa: untar_snap_rm failure during mds thrashing
2829
* https://tracker.ceph.com/issues/48773
2830
    qa: scrub does not complete
2831
* https://tracker.ceph.com/issues/45591
2832
    mgr: FAILED ceph_assert(daemon != nullptr)
2833
* https://tracker.ceph.com/issues/50866
2834
    osd: stat mismatch on objects
2835
* https://tracker.ceph.com/issues/50016
2836
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2837
* https://tracker.ceph.com/issues/50867
2838
    qa: fs:mirror: reduced data availability
2839
* https://tracker.ceph.com/issues/50821
2840
    qa: untar_snap_rm failure during mds thrashing
2841
* https://tracker.ceph.com/issues/50622 (regression)
2842
    msg: active_connections regression
2843
* https://tracker.ceph.com/issues/50223
2844
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2845
* https://tracker.ceph.com/issues/50868
2846
    qa: "kern.log.gz already exists; not overwritten"
2847
* https://tracker.ceph.com/issues/50870
2848
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2849 6 Patrick Donnelly
2850
2851
h3. 2021 May 11
2852
2853
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2854
2855
* one class of failures caused by PR
2856
* https://tracker.ceph.com/issues/48812
2857
    qa: test_scrub_pause_and_resume_with_abort failure
2858
* https://tracker.ceph.com/issues/50390
2859
    mds: monclient: wait_auth_rotating timed out after 30
2860
* https://tracker.ceph.com/issues/48773
2861
    qa: scrub does not complete
2862
* https://tracker.ceph.com/issues/50821
2863
    qa: untar_snap_rm failure during mds thrashing
2864
* https://tracker.ceph.com/issues/50224
2865
    qa: test_mirroring_init_failure_with_recovery failure
2866
* https://tracker.ceph.com/issues/50622 (regression)
2867
    msg: active_connections regression
2868
* https://tracker.ceph.com/issues/50825
2869
    qa: snaptest-git-ceph hang during mon thrashing v2
2870
* https://tracker.ceph.com/issues/50821
2871
    qa: untar_snap_rm failure during mds thrashing
2872
* https://tracker.ceph.com/issues/50823
2873
    qa: RuntimeError: timeout waiting for cluster to stabilize
2874 5 Patrick Donnelly
2875
2876
h3. 2021 May 14
2877
2878
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2879
2880
* https://tracker.ceph.com/issues/48812
2881
    qa: test_scrub_pause_and_resume_with_abort failure
2882
* https://tracker.ceph.com/issues/50821
2883
    qa: untar_snap_rm failure during mds thrashing
2884
* https://tracker.ceph.com/issues/50622 (regression)
2885
    msg: active_connections regression
2886
* https://tracker.ceph.com/issues/50822
2887
    qa: testing kernel patch for client metrics causes mds abort
2888
* https://tracker.ceph.com/issues/48773
2889
    qa: scrub does not complete
2890
* https://tracker.ceph.com/issues/50823
2891
    qa: RuntimeError: timeout waiting for cluster to stabilize
2892
* https://tracker.ceph.com/issues/50824
2893
    qa: snaptest-git-ceph bus error
2894
* https://tracker.ceph.com/issues/50825
2895
    qa: snaptest-git-ceph hang during mon thrashing v2
2896
* https://tracker.ceph.com/issues/50826
2897
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2898 4 Patrick Donnelly
2899
2900
h3. 2021 May 01
2901
2902
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2903
2904
* https://tracker.ceph.com/issues/45434
2905
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2906
* https://tracker.ceph.com/issues/50281
2907
    qa: untar_snap_rm timeout
2908
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2909
    qa: quota failure
2910
* https://tracker.ceph.com/issues/48773
2911
    qa: scrub does not complete
2912
* https://tracker.ceph.com/issues/50390
2913
    mds: monclient: wait_auth_rotating timed out after 30
2914
* https://tracker.ceph.com/issues/50250
2915
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2916
* https://tracker.ceph.com/issues/50622 (regression)
2917
    msg: active_connections regression
2918
* https://tracker.ceph.com/issues/45591
2919
    mgr: FAILED ceph_assert(daemon != nullptr)
2920
* https://tracker.ceph.com/issues/50221
2921
    qa: snaptest-git-ceph failure in git diff
2922
* https://tracker.ceph.com/issues/50016
2923
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2924 3 Patrick Donnelly
2925
2926
h3. 2021 Apr 15
2927
2928
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2929
2930
* https://tracker.ceph.com/issues/50281
2931
    qa: untar_snap_rm timeout
2932
* https://tracker.ceph.com/issues/50220
2933
    qa: dbench workload timeout
2934
* https://tracker.ceph.com/issues/50246
2935
    mds: failure replaying journal (EMetaBlob)
2936
* https://tracker.ceph.com/issues/50250
2937
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2938
* https://tracker.ceph.com/issues/50016
2939
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2940
* https://tracker.ceph.com/issues/50222
2941
    osd: 5.2s0 deep-scrub : stat mismatch
2942
* https://tracker.ceph.com/issues/45434
2943
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2944
* https://tracker.ceph.com/issues/49845
2945
    qa: failed umount in test_volumes
2946
* https://tracker.ceph.com/issues/37808
2947
    osd: osdmap cache weak_refs assert during shutdown
2948
* https://tracker.ceph.com/issues/50387
2949
    client: fs/snaps failure
2950
* https://tracker.ceph.com/issues/50389
2951
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
2952
* https://tracker.ceph.com/issues/50216
2953
    qa: "ls: cannot access 'lost+found': No such file or directory"
2954
* https://tracker.ceph.com/issues/50390
2955
    mds: monclient: wait_auth_rotating timed out after 30
2956
2957 1 Patrick Donnelly
2958
2959 2 Patrick Donnelly
h3. 2021 Apr 08
2960
2961
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2962
2963
* https://tracker.ceph.com/issues/45434
2964
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2965
* https://tracker.ceph.com/issues/50016
2966
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2967
* https://tracker.ceph.com/issues/48773
2968
    qa: scrub does not complete
2969
* https://tracker.ceph.com/issues/50279
2970
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2971
* https://tracker.ceph.com/issues/50246
2972
    mds: failure replaying journal (EMetaBlob)
2973
* https://tracker.ceph.com/issues/48365
2974
    qa: ffsb build failure on CentOS 8.2
2975
* https://tracker.ceph.com/issues/50216
2976
    qa: "ls: cannot access 'lost+found': No such file or directory"
2977
* https://tracker.ceph.com/issues/50223
2978
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2979
* https://tracker.ceph.com/issues/50280
2980
    cephadm: RuntimeError: uid/gid not found
2981
* https://tracker.ceph.com/issues/50281
2982
    qa: untar_snap_rm timeout
2983
2984 1 Patrick Donnelly
h3. 2021 Apr 08
2985
2986
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2987
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2988
2989
* https://tracker.ceph.com/issues/50246
2990
    mds: failure replaying journal (EMetaBlob)
2991
* https://tracker.ceph.com/issues/50250
2992
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2993
2994
2995
h3. 2021 Apr 07
2996
2997
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
2998
2999
* https://tracker.ceph.com/issues/50215
3000
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
3001
* https://tracker.ceph.com/issues/49466
3002
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3003
* https://tracker.ceph.com/issues/50216
3004
    qa: "ls: cannot access 'lost+found': No such file or directory"
3005
* https://tracker.ceph.com/issues/48773
3006
    qa: scrub does not complete
3007
* https://tracker.ceph.com/issues/49845
3008
    qa: failed umount in test_volumes
3009
* https://tracker.ceph.com/issues/50220
3010
    qa: dbench workload timeout
3011
* https://tracker.ceph.com/issues/50221
3012
    qa: snaptest-git-ceph failure in git diff
3013
* https://tracker.ceph.com/issues/50222
3014
    osd: 5.2s0 deep-scrub : stat mismatch
3015
* https://tracker.ceph.com/issues/50223
3016
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3017
* https://tracker.ceph.com/issues/50224
3018
    qa: test_mirroring_init_failure_with_recovery failure
3019
3020
h3. 2021 Apr 01
3021
3022
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
3023
3024
* https://tracker.ceph.com/issues/48772
3025
    qa: pjd: not ok 9, 44, 80
3026
* https://tracker.ceph.com/issues/50177
3027
    osd: "stalled aio... buggy kernel or bad device?"
3028
* https://tracker.ceph.com/issues/48771
3029
    qa: iogen: workload fails to cause balancing
3030
* https://tracker.ceph.com/issues/49845
3031
    qa: failed umount in test_volumes
3032
* https://tracker.ceph.com/issues/48773
3033
    qa: scrub does not complete
3034
* https://tracker.ceph.com/issues/48805
3035
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3036
* https://tracker.ceph.com/issues/50178
3037
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3038
* https://tracker.ceph.com/issues/45434
3039
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3040
3041
h3. 2021 Mar 24
3042
3043
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3044
3045
* https://tracker.ceph.com/issues/49500
3046
    qa: "Assertion `cb_done' failed."
3047
* https://tracker.ceph.com/issues/50019
3048
    qa: mount failure with cephadm "probably no MDS server is up?"
3049
* https://tracker.ceph.com/issues/50020
3050
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3051
* https://tracker.ceph.com/issues/48773
3052
    qa: scrub does not complete
3053
* https://tracker.ceph.com/issues/45434
3054
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3055
* https://tracker.ceph.com/issues/48805
3056
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3057
* https://tracker.ceph.com/issues/48772
3058
    qa: pjd: not ok 9, 44, 80
3059
* https://tracker.ceph.com/issues/50021
3060
    qa: snaptest-git-ceph failure during mon thrashing
3061
* https://tracker.ceph.com/issues/48771
3062
    qa: iogen: workload fails to cause balancing
3063
* https://tracker.ceph.com/issues/50016
3064
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3065
* https://tracker.ceph.com/issues/49466
3066
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3067
3068
3069
h3. 2021 Mar 18
3070
3071
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3072
3073
* https://tracker.ceph.com/issues/49466
3074
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3075
* https://tracker.ceph.com/issues/48773
3076
    qa: scrub does not complete
3077
* https://tracker.ceph.com/issues/48805
3078
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3079
* https://tracker.ceph.com/issues/45434
3080
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3081
* https://tracker.ceph.com/issues/49845
3082
    qa: failed umount in test_volumes
3083
* https://tracker.ceph.com/issues/49605
3084
    mgr: drops command on the floor
3085
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3086
    qa: quota failure
3087
* https://tracker.ceph.com/issues/49928
3088
    client: items pinned in cache preventing unmount x2
3089
3090
h3. 2021 Mar 15
3091
3092
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3093
3094
* https://tracker.ceph.com/issues/49842
3095
    qa: stuck pkg install
3096
* https://tracker.ceph.com/issues/49466
3097
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3098
* https://tracker.ceph.com/issues/49822
3099
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3100
* https://tracker.ceph.com/issues/49240
3101
    terminate called after throwing an instance of 'std::bad_alloc'
3102
* https://tracker.ceph.com/issues/48773
3103
    qa: scrub does not complete
3104
* https://tracker.ceph.com/issues/45434
3105
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3106
* https://tracker.ceph.com/issues/49500
3107
    qa: "Assertion `cb_done' failed."
3108
* https://tracker.ceph.com/issues/49843
3109
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3110
* https://tracker.ceph.com/issues/49845
3111
    qa: failed umount in test_volumes
3112
* https://tracker.ceph.com/issues/48805
3113
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3114
* https://tracker.ceph.com/issues/49605
3115
    mgr: drops command on the floor
3116
3117
and failure caused by PR: https://github.com/ceph/ceph/pull/39969
3118
3119
3120
h3. 2021 Mar 09
3121
3122
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3123
3124
* https://tracker.ceph.com/issues/49500
3125
    qa: "Assertion `cb_done' failed."
3126
* https://tracker.ceph.com/issues/48805
3127
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3128
* https://tracker.ceph.com/issues/48773
3129
    qa: scrub does not complete
3130
* https://tracker.ceph.com/issues/45434
3131
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3132
* https://tracker.ceph.com/issues/49240
3133
    terminate called after throwing an instance of 'std::bad_alloc'
3134
* https://tracker.ceph.com/issues/49466
3135
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3136
* https://tracker.ceph.com/issues/49684
3137
    qa: fs:cephadm mount does not wait for mds to be created
3138
* https://tracker.ceph.com/issues/48771
3139
    qa: iogen: workload fails to cause balancing