h1. <code>main</code> branch

h3. 2024-04-02

https://tracker.ceph.com/issues/65215

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
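
As a rough illustration of what those log checks look for (a minimal sketch only; the real scraping lives in the teuthology/qa tasks, and the log path and ignorelist entries below are hypothetical):

<pre><code class="bash">
# Illustrative sketch: flag cluster-log WRN/ERR lines that are not covered
# by an ignorelist. Path and ignorelist entries are placeholders.
log=/path/to/cluster.ceph.log
ignorelist='POOL_APP_NOT_ENABLED|CEPHADM_STRAY_DAEMON'
grep -E 'cluster \[(WRN|ERR)\]' "$log" | grep -Ev "$ignorelist" \
  && echo "unexpected WRN/ERR found" || echo "clean"
</code></pre>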

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: "[WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: "Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: "cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3.  14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    "mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denials related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
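
A quick way to see the symptom by hand (a minimal sketch under assumptions: the mountpoint path is a placeholder and the 300-second window mirrors the qa wait loop; the teuthology logs on the tracker are authoritative):

<pre><code class="bash">
# Illustrative sketch: request the unmount, then poll whether the kernel
# actually dropped the mount. On an affected client this loop times out.
mnt=/mnt/cephfs   # placeholder ceph-fuse mountpoint
fusermount -u "$mnt"
for i in $(seq 1 30); do
    mountpoint -q "$mnt" || { echo "unmounted"; exit 0; }
    sleep 10
done
echo "still mounted after 300s" >&2
exit 1
</code></pre>
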
h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in last run was due to a kernel MM layer failure, unrelated to CephFS
* from last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023
282
283
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
284
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
285
286
* https://tracker.ceph.com/issues/63764
287
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
288
* https://tracker.ceph.com/issues/63233
289
    mon|client|mds: valgrind reports possible leaks in the MDS
290
* https://tracker.ceph.com/issues/57676
291
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
292
* https://tracker.ceph.com/issues/62580
293
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
294
* https://tracker.ceph.com/issues/62067
295
    ffsb.sh failure "Resource temporarily unavailable"
296
* https://tracker.ceph.com/issues/61243
297
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
298
* https://tracker.ceph.com/issues/62081
299
    tasks/fscrypt-common does not finish, timesout
300
* https://tracker.ceph.com/issues/63265
301
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
302
* https://tracker.ceph.com/issues/63806
303
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
304
305 211 Patrick Donnelly
h3. 30 Nov 2023
306
307
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
308
309
* https://tracker.ceph.com/issues/63699
310 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
311
* https://tracker.ceph.com/issues/63700
312
    qa: test_cd_with_args failure
313 211 Patrick Donnelly
314 210 Venky Shankar
h3. 29 Nov 2023
315
316
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
317
318
* https://tracker.ceph.com/issues/63233
319
    mon|client|mds: valgrind reports possible leaks in the MDS
320
* https://tracker.ceph.com/issues/63141
321
    qa/cephfs: test_idem_unaffected_root_squash fails
322
* https://tracker.ceph.com/issues/57676
323
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
324
* https://tracker.ceph.com/issues/57655
325
    qa: fs:mixed-clients kernel_untar_build failure
326
* https://tracker.ceph.com/issues/62067
327
    ffsb.sh failure "Resource temporarily unavailable"
328
* https://tracker.ceph.com/issues/61243
329
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
330
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
331
* https://tracker.ceph.com/issues/62810
332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
333
334 206 Venky Shankar
h3. 14 Nov 2023
335 207 Milind Changire
(Milind)
336
337
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
338
339
* https://tracker.ceph.com/issues/53859
340
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
341
* https://tracker.ceph.com/issues/63233
342
  mon|client|mds: valgrind reports possible leaks in the MDS
343
* https://tracker.ceph.com/issues/63521
344
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
345
* https://tracker.ceph.com/issues/57655
346
  qa: fs:mixed-clients kernel_untar_build failure
347
* https://tracker.ceph.com/issues/62580
348
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
349
* https://tracker.ceph.com/issues/57676
350
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
351
* https://tracker.ceph.com/issues/61243
352
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
353
* https://tracker.ceph.com/issues/63141
354
    qa/cephfs: test_idem_unaffected_root_squash fails
355
* https://tracker.ceph.com/issues/51964
356
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
357
* https://tracker.ceph.com/issues/63522
358
    No module named 'tasks.ceph_fuse'
359
    No module named 'tasks.kclient'
360
    No module named 'tasks.cephfs.fuse_mount'
361
    No module named 'tasks.ceph'
362
* https://tracker.ceph.com/issues/63523
363
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
364
365
366
h3. 14 Nov 2023
367 206 Venky Shankar
368
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
369
370
(nvm the fs:upgrade test failure - the PR is excluded from merge)
371
372
* https://tracker.ceph.com/issues/57676
373
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
374
* https://tracker.ceph.com/issues/63233
375
    mon|client|mds: valgrind reports possible leaks in the MDS
376
* https://tracker.ceph.com/issues/63141
377
    qa/cephfs: test_idem_unaffected_root_squash fails
378
* https://tracker.ceph.com/issues/62580
379
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
380
* https://tracker.ceph.com/issues/57655
381
    qa: fs:mixed-clients kernel_untar_build failure
382
* https://tracker.ceph.com/issues/51964
383
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
384
* https://tracker.ceph.com/issues/63519
385
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
386
* https://tracker.ceph.com/issues/57087
387
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
388
* https://tracker.ceph.com/issues/58945
389
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
390
391 204 Rishabh Dave
h3. 7 Nov 2023
392
393 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
394
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
395
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
396 204 Rishabh Dave
397
* https://tracker.ceph.com/issues/53859
398
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
399
* https://tracker.ceph.com/issues/63233
400
  mon|client|mds: valgrind reports possible leaks in the MDS
401
* https://tracker.ceph.com/issues/57655
402
  qa: fs:mixed-clients kernel_untar_build failure
403
* https://tracker.ceph.com/issues/57676
404
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
405
406
* https://tracker.ceph.com/issues/63473
407
  fsstress.sh failed with errno 124
408
409 202 Rishabh Dave
h3. 3 Nov 2023
410 203 Rishabh Dave
411 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
412
413
* https://tracker.ceph.com/issues/63141
414
  qa/cephfs: test_idem_unaffected_root_squash fails
415
* https://tracker.ceph.com/issues/63233
416
  mon|client|mds: valgrind reports possible leaks in the MDS
417
* https://tracker.ceph.com/issues/57656
418
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
419
* https://tracker.ceph.com/issues/57655
420
  qa: fs:mixed-clients kernel_untar_build failure
421
* https://tracker.ceph.com/issues/57676
422
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
423
424
* https://tracker.ceph.com/issues/59531
425
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
426
* https://tracker.ceph.com/issues/52624
427
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
428
429 198 Patrick Donnelly
h3. 24 October 2023
430
431
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
432
433 200 Patrick Donnelly
Two failures:
434
435
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
436
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
437
438
Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.
439
440 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
441
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
442
* https://tracker.ceph.com/issues/57676
443 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
444
* https://tracker.ceph.com/issues/63233
445
    mon|client|mds: valgrind reports possible leaks in the MDS
446
* https://tracker.ceph.com/issues/59531
447
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
448
* https://tracker.ceph.com/issues/57655
449
    qa: fs:mixed-clients kernel_untar_build failure
450 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
451
    ffsb.sh failure "Resource temporarily unavailable"
452
* https://tracker.ceph.com/issues/63411
453
    qa: flush journal may cause timeouts of `scrub status`
454
* https://tracker.ceph.com/issues/61243
455
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
456
* https://tracker.ceph.com/issues/63141
457 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
458 148 Rishabh Dave
459 195 Venky Shankar
h3. 18 Oct 2023
460
461
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
462
463
* https://tracker.ceph.com/issues/52624
464
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
465
* https://tracker.ceph.com/issues/57676
466
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
467
* https://tracker.ceph.com/issues/63233
468
    mon|client|mds: valgrind reports possible leaks in the MDS
469
* https://tracker.ceph.com/issues/63141
470
    qa/cephfs: test_idem_unaffected_root_squash fails
471
* https://tracker.ceph.com/issues/59531
472
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
473
* https://tracker.ceph.com/issues/62658
474
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
475
* https://tracker.ceph.com/issues/62580
476
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
477
* https://tracker.ceph.com/issues/62067
478
    ffsb.sh failure "Resource temporarily unavailable"
479
* https://tracker.ceph.com/issues/57655
480
    qa: fs:mixed-clients kernel_untar_build failure
481
* https://tracker.ceph.com/issues/62036
482
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
483
* https://tracker.ceph.com/issues/58945
484
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
485
* https://tracker.ceph.com/issues/62847
486
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
487
488 193 Venky Shankar
h3. 13 Oct 2023
489
490
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
491
492
* https://tracker.ceph.com/issues/52624
493
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
494
* https://tracker.ceph.com/issues/62936
495
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
496
* https://tracker.ceph.com/issues/47292
497
    cephfs-shell: test_df_for_valid_file failure
498
* https://tracker.ceph.com/issues/63141
499
    qa/cephfs: test_idem_unaffected_root_squash fails
500
* https://tracker.ceph.com/issues/62081
501
    tasks/fscrypt-common does not finish, timesout
502 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
503
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
504 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
505
    mon|client|mds: valgrind reports possible leaks in the MDS
506 193 Venky Shankar
507 190 Patrick Donnelly
h3. 16 Oct 2023
508
509
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
510
511 192 Patrick Donnelly
Infrastructure issues:
512
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
513
    Host lost.
514
515 196 Patrick Donnelly
One followup fix:
516
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
517
518 192 Patrick Donnelly
Failures:
519
520
* https://tracker.ceph.com/issues/56694
521
    qa: avoid blocking forever on hung umount
522
* https://tracker.ceph.com/issues/63089
523
    qa: tasks/mirror times out
524
* https://tracker.ceph.com/issues/52624
525
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
526
* https://tracker.ceph.com/issues/59531
527
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
528
* https://tracker.ceph.com/issues/57676
529
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
530
* https://tracker.ceph.com/issues/62658 
531
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
532
* https://tracker.ceph.com/issues/61243
533
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
534
* https://tracker.ceph.com/issues/57656
535
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
536
* https://tracker.ceph.com/issues/63233
537
  mon|client|mds: valgrind reports possible leaks in the MDS
538 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
539
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
540 192 Patrick Donnelly
541 189 Rishabh Dave
h3. 9 Oct 2023
542
543
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
544
545
* https://tracker.ceph.com/issues/54460
546
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
547
* https://tracker.ceph.com/issues/63141
548
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
549
* https://tracker.ceph.com/issues/62937
550
  logrotate doesn't support parallel execution on same set of logfiles
551
* https://tracker.ceph.com/issues/61400
552
  valgrind+ceph-mon issues
553
* https://tracker.ceph.com/issues/57676
554
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
555
* https://tracker.ceph.com/issues/55805
556
  error during scrub thrashing reached max tries in 900 secs
557
558 188 Venky Shankar
h3. 26 Sep 2023
559
560
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
561
562
* https://tracker.ceph.com/issues/52624
563
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
564
* https://tracker.ceph.com/issues/62873
565
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
566
* https://tracker.ceph.com/issues/61400
567
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
568
* https://tracker.ceph.com/issues/57676
569
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
570
* https://tracker.ceph.com/issues/62682
571
    mon: no mdsmap broadcast after "fs set joinable" is set to true
572
* https://tracker.ceph.com/issues/63089
573
    qa: tasks/mirror times out
574
575 185 Rishabh Dave
h3. 22 Sep 2023
576
577
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
578
579
* https://tracker.ceph.com/issues/59348
580
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
581
* https://tracker.ceph.com/issues/59344
582
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
583
* https://tracker.ceph.com/issues/59531
584
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
585
* https://tracker.ceph.com/issues/61574
586
  build failure for mdtest project
587
* https://tracker.ceph.com/issues/62702
588
  fsstress.sh: MDS slow requests for the internal 'rename' requests
589
* https://tracker.ceph.com/issues/57676
590
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
591
592
* https://tracker.ceph.com/issues/62863 
593
  deadlock in ceph-fuse causes teuthology job to hang and fail
594
* https://tracker.ceph.com/issues/62870
595
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
596
* https://tracker.ceph.com/issues/62873
597
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
598
599 186 Venky Shankar
h3. 20 Sep 2023
600
601
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
602
603
* https://tracker.ceph.com/issues/52624
604
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
605
* https://tracker.ceph.com/issues/61400
606
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
607
* https://tracker.ceph.com/issues/61399
608
    libmpich: undefined references to fi_strerror
609
* https://tracker.ceph.com/issues/62081
610
    tasks/fscrypt-common does not finish, timesout
611
* https://tracker.ceph.com/issues/62658 
612
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
613
* https://tracker.ceph.com/issues/62915
614
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
615
* https://tracker.ceph.com/issues/59531
616
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
617
* https://tracker.ceph.com/issues/62873
618
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
619
* https://tracker.ceph.com/issues/62936
620
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
621
* https://tracker.ceph.com/issues/62937
622
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
623
* https://tracker.ceph.com/issues/62510
624
    snaptest-git-ceph.sh failure with fs/thrash
627
* https://tracker.ceph.com/issues/62126
628
    test failure: suites/blogbench.sh stops running
629 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
630
    mon: no mdsmap broadcast after "fs set joinable" is set to true
631 186 Venky Shankar
632 184 Milind Changire
h3. 19 Sep 2023
633
634
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
635
636
* https://tracker.ceph.com/issues/58220#note-9
637
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
638
* https://tracker.ceph.com/issues/62702
639
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
640
* https://tracker.ceph.com/issues/57676
641
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
642
* https://tracker.ceph.com/issues/59348
643
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
644
* https://tracker.ceph.com/issues/52624
645
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
646
* https://tracker.ceph.com/issues/51964
647
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
648
* https://tracker.ceph.com/issues/61243
649
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
650
* https://tracker.ceph.com/issues/59344
651
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
652
* https://tracker.ceph.com/issues/62873
653
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
654
* https://tracker.ceph.com/issues/59413
655
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
656
* https://tracker.ceph.com/issues/53859
657
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
658
* https://tracker.ceph.com/issues/62482
659
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
660
661 178 Patrick Donnelly
662 177 Venky Shankar
h3. 13 Sep 2023
663
664
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
665
666
* https://tracker.ceph.com/issues/52624
667
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
668
* https://tracker.ceph.com/issues/57655
669
    qa: fs:mixed-clients kernel_untar_build failure
670
* https://tracker.ceph.com/issues/57676
671
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
672
* https://tracker.ceph.com/issues/61243
673
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
674
* https://tracker.ceph.com/issues/62567
675
    postgres workunit times out - MDS_SLOW_REQUEST in logs
676
* https://tracker.ceph.com/issues/61400
677
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
678
* https://tracker.ceph.com/issues/61399
679
    libmpich: undefined references to fi_strerror
684
* https://tracker.ceph.com/issues/51964
685
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
686
* https://tracker.ceph.com/issues/62081
687
    tasks/fscrypt-common does not finish, timesout
688 178 Patrick Donnelly
689 179 Patrick Donnelly
h3. 2023 Sep 12
690 178 Patrick Donnelly
691
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
692 1 Patrick Donnelly
693 181 Patrick Donnelly
A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
694
695 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
696 181 Patrick Donnelly
697
Failures:
698
699 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
700
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
701
* https://tracker.ceph.com/issues/57656
702
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
703
* https://tracker.ceph.com/issues/55805
704
  error scrub thrashing reached max tries in 900 secs
705
* https://tracker.ceph.com/issues/62067
706
    ffsb.sh failure "Resource temporarily unavailable"
707
* https://tracker.ceph.com/issues/59344
708
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
709
* https://tracker.ceph.com/issues/61399
710 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
711
* https://tracker.ceph.com/issues/62832
712
  common: config_proxy deadlock during shutdown (and possibly other times)
713
* https://tracker.ceph.com/issues/59413
714 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
715 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
716
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
717
* https://tracker.ceph.com/issues/62567
718
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
719
* https://tracker.ceph.com/issues/54460
720
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
721
* https://tracker.ceph.com/issues/58220#note-9
722
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
725 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
726
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
727
* https://tracker.ceph.com/issues/62848
728
    qa: fail_fs upgrade scenario hanging
729
* https://tracker.ceph.com/issues/62081
730
    tasks/fscrypt-common does not finish, timesout
731 177 Venky Shankar
732 176 Venky Shankar
h3. 11 Sep 2023
733 175 Venky Shankar
734
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
735
736
* https://tracker.ceph.com/issues/52624
737
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
738
* https://tracker.ceph.com/issues/61399
739
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
740
* https://tracker.ceph.com/issues/57655
741
    qa: fs:mixed-clients kernel_untar_build failure
742
* https://tracker.ceph.com/issues/61399
743
    ior build failure
744
* https://tracker.ceph.com/issues/59531
745
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
746
* https://tracker.ceph.com/issues/59344
747
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
748
* https://tracker.ceph.com/issues/59346
749
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
750
* https://tracker.ceph.com/issues/59348
751
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
752
* https://tracker.ceph.com/issues/57676
753
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
754
* https://tracker.ceph.com/issues/61243
755
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
756
* https://tracker.ceph.com/issues/62567
757
  postgres workunit times out - MDS_SLOW_REQUEST in logs
758
759
760 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
761
762
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
763
764
* https://tracker.ceph.com/issues/51964
765
  test_cephfs_mirror_restart_sync_on_blocklist failure
766
* https://tracker.ceph.com/issues/59348
767
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
768
* https://tracker.ceph.com/issues/53859
769
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
770
* https://tracker.ceph.com/issues/61892
771
  test_strays.TestStrays.test_snapshot_remove failed
772
* https://tracker.ceph.com/issues/54460
773
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
774
* https://tracker.ceph.com/issues/59346
775
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
776
* https://tracker.ceph.com/issues/59344
777
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
778
* https://tracker.ceph.com/issues/62484
779
  qa: ffsb.sh test failure
780
* https://tracker.ceph.com/issues/62567
781
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
782
  
783
* https://tracker.ceph.com/issues/61399
784
  ior build failure
785
* https://tracker.ceph.com/issues/57676
786
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
787
* https://tracker.ceph.com/issues/55805
788
  error scrub thrashing reached max tries in 900 secs
789
790 172 Rishabh Dave
h3. 6 Sep 2023
791 171 Rishabh Dave
792 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
793 171 Rishabh Dave
794 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
795
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
796 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
797
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
798 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
799 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
800
* https://tracker.ceph.com/issues/59348
801
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
802
* https://tracker.ceph.com/issues/54462
803
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
804
* https://tracker.ceph.com/issues/62556
805
  test_acls: xfstests_dev: python2 is missing
806
* https://tracker.ceph.com/issues/62067
807
  ffsb.sh failure "Resource temporarily unavailable"
808
* https://tracker.ceph.com/issues/57656
809
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
810 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
811
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
812 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
813 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
814
815 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
816
  ior build failure
817
* https://tracker.ceph.com/issues/57676
818
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
819
* https://tracker.ceph.com/issues/55805
820
  error scrub thrashing reached max tries in 900 secs
821 173 Rishabh Dave
822
* https://tracker.ceph.com/issues/62567
823
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
824
* https://tracker.ceph.com/issues/62702
825
  workunit test suites/fsstress.sh on smithi066 with status 124
826 170 Rishabh Dave
827
h3. 5 Sep 2023
828
829
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
830
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
831
  this run has failures but according to Adam King these are not relevant and should be ignored
832
833
* https://tracker.ceph.com/issues/61892
834
  test_snapshot_remove (test_strays.TestStrays) failed
835
* https://tracker.ceph.com/issues/59348
836
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
837
* https://tracker.ceph.com/issues/54462
838
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
839
* https://tracker.ceph.com/issues/62067
840
  ffsb.sh failure "Resource temporarily unavailable"
841
* https://tracker.ceph.com/issues/57656 
842
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
843
* https://tracker.ceph.com/issues/59346
844
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
845
* https://tracker.ceph.com/issues/59344
846
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
847
* https://tracker.ceph.com/issues/50223
848
  client.xxxx isn't responding to mclientcaps(revoke)
849
* https://tracker.ceph.com/issues/57655
850
  qa: fs:mixed-clients kernel_untar_build failure
851
* https://tracker.ceph.com/issues/62187
852
  iozone.sh: line 5: iozone: command not found
853
 
854
* https://tracker.ceph.com/issues/61399
855
  ior build failure
856
* https://tracker.ceph.com/issues/57676
857
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
858
* https://tracker.ceph.com/issues/55805
859
  error scrub thrashing reached max tries in 900 secs
860 169 Venky Shankar
861
862
h3. 31 Aug 2023
863
864
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
865
866
* https://tracker.ceph.com/issues/52624
867
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
868
* https://tracker.ceph.com/issues/62187
869
    iozone: command not found
870
* https://tracker.ceph.com/issues/61399
871
    ior build failure
872
* https://tracker.ceph.com/issues/59531
873
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
874
* https://tracker.ceph.com/issues/61399
875
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
876
* https://tracker.ceph.com/issues/57655
877
    qa: fs:mixed-clients kernel_untar_build failure
878
* https://tracker.ceph.com/issues/59344
879
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
880
* https://tracker.ceph.com/issues/59346
881
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
882
* https://tracker.ceph.com/issues/59348
883
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
884
* https://tracker.ceph.com/issues/59413
885
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
886
* https://tracker.ceph.com/issues/62653
887
    qa: unimplemented fcntl command: 1036 with fsstress
888
* https://tracker.ceph.com/issues/61400
889
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
890
* https://tracker.ceph.com/issues/62658
891
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
892
* https://tracker.ceph.com/issues/62188
893
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
894 168 Venky Shankar
895
896
h3. 25 Aug 2023
897
898
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
899
900
* https://tracker.ceph.com/issues/59344
901
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
902
* https://tracker.ceph.com/issues/59346
903
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
904
* https://tracker.ceph.com/issues/59348
905
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
906
* https://tracker.ceph.com/issues/57655
907
    qa: fs:mixed-clients kernel_untar_build failure
908
* https://tracker.ceph.com/issues/61243
909
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
910
* https://tracker.ceph.com/issues/61399
911
    ior build failure
912
* https://tracker.ceph.com/issues/61399
913
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
914
* https://tracker.ceph.com/issues/62484
915
    qa: ffsb.sh test failure
916
* https://tracker.ceph.com/issues/59531
917
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
918
* https://tracker.ceph.com/issues/62510
919
    snaptest-git-ceph.sh failure with fs/thrash
920 167 Venky Shankar
921
922
h3. 24 Aug 2023
923
924
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
925
926
* https://tracker.ceph.com/issues/57676
927
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
928
* https://tracker.ceph.com/issues/51964
929
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
930
* https://tracker.ceph.com/issues/59344
931
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
932
* https://tracker.ceph.com/issues/59346
933
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
934
* https://tracker.ceph.com/issues/59348
935
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
936
* https://tracker.ceph.com/issues/61399
937
    ior build failure
938
* https://tracker.ceph.com/issues/61399
939
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
940
* https://tracker.ceph.com/issues/62510
941
    snaptest-git-ceph.sh failure with fs/thrash
942
* https://tracker.ceph.com/issues/62484
943
    qa: ffsb.sh test failure
944
* https://tracker.ceph.com/issues/57087
945
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
946
* https://tracker.ceph.com/issues/57656
947
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
948
* https://tracker.ceph.com/issues/62187
949
    iozone: command not found
950
* https://tracker.ceph.com/issues/62188
951
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
952
* https://tracker.ceph.com/issues/62567
953
    postgres workunit times out - MDS_SLOW_REQUEST in logs
954 166 Venky Shankar
955
956
h3. 22 Aug 2023
957
958
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
959
960
* https://tracker.ceph.com/issues/57676
961
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
962
* https://tracker.ceph.com/issues/51964
963
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
964
* https://tracker.ceph.com/issues/59344
965
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
966
* https://tracker.ceph.com/issues/59346
967
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
968
* https://tracker.ceph.com/issues/59348
969
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
970
* https://tracker.ceph.com/issues/61399
971
    ior build failure
972
* https://tracker.ceph.com/issues/61399
973
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
974
* https://tracker.ceph.com/issues/57655
975
    qa: fs:mixed-clients kernel_untar_build failure
976
* https://tracker.ceph.com/issues/61243
977
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
978
* https://tracker.ceph.com/issues/62188
979
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
980
* https://tracker.ceph.com/issues/62510
981
    snaptest-git-ceph.sh failure with fs/thrash
982
* https://tracker.ceph.com/issues/62511
983
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
984 165 Venky Shankar
985
986
h3. 14 Aug 2023
987
988
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
989
990
* https://tracker.ceph.com/issues/51964
991
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
992
* https://tracker.ceph.com/issues/61400
993
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
994
* https://tracker.ceph.com/issues/61399
995
    ior build failure
996
* https://tracker.ceph.com/issues/59348
997
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
998
* https://tracker.ceph.com/issues/59531
999
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1000
* https://tracker.ceph.com/issues/59344
1001
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1002
* https://tracker.ceph.com/issues/59346
1003
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1004
* https://tracker.ceph.com/issues/61399
1005
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1006
* https://tracker.ceph.com/issues/59684 [kclient bug]
1007
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1008
* https://tracker.ceph.com/issues/61243 (NEW)
1009
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1010
* https://tracker.ceph.com/issues/57655
1011
    qa: fs:mixed-clients kernel_untar_build failure
1012
* https://tracker.ceph.com/issues/57656
1013
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1014 163 Venky Shankar
1015
1016
h3. 28 July 2023
1017
1018
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1019
1020
* https://tracker.ceph.com/issues/51964
1021
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1022
* https://tracker.ceph.com/issues/61400
1023
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1024
* https://tracker.ceph.com/issues/61399
1025
    ior build failure
1026
* https://tracker.ceph.com/issues/57676
1027
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1028
* https://tracker.ceph.com/issues/59348
1029
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1030
* https://tracker.ceph.com/issues/59531
1031
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1032
* https://tracker.ceph.com/issues/59344
1033
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1034
* https://tracker.ceph.com/issues/59346
1035
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1036
* https://github.com/ceph/ceph/pull/52556
1037
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1038
* https://tracker.ceph.com/issues/62187
1039
    iozone: command not found
1040
* https://tracker.ceph.com/issues/61399
1041
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1042
* https://tracker.ceph.com/issues/62188
1043 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1044 158 Rishabh Dave
1045
h3. 24 Jul 2023
1046
1047
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1048
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1049
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1050
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1051
One more run to check whether blogbench.sh fails every time:
1052
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1053
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing -
1054 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1055
1056
* https://tracker.ceph.com/issues/61892
1057
  test_snapshot_remove (test_strays.TestStrays) failed
1058
* https://tracker.ceph.com/issues/53859
1059
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1060
* https://tracker.ceph.com/issues/61982
1061
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1062
* https://tracker.ceph.com/issues/52438
1063
  qa: ffsb timeout
1064
* https://tracker.ceph.com/issues/54460
1065
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1066
* https://tracker.ceph.com/issues/57655
1067
  qa: fs:mixed-clients kernel_untar_build failure
1068
* https://tracker.ceph.com/issues/48773
1069
  reached max tries: scrub does not complete
1070
* https://tracker.ceph.com/issues/58340
1071
  mds: fsstress.sh hangs with multimds
1072
* https://tracker.ceph.com/issues/61400
1073
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1074
* https://tracker.ceph.com/issues/57206
1075
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1076
  
1077
* https://tracker.ceph.com/issues/57656
1078
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1079
* https://tracker.ceph.com/issues/61399
1080
  ior build failure
1081
* https://tracker.ceph.com/issues/57676
1082
  error during scrub thrashing: backtrace (see the damage-check sketch at the end of this list)
1083
  
1084
* https://tracker.ceph.com/issues/38452
1085
  'sudo -u postgres -- pgbench -s 500 -i' failed
1086 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1087 157 Venky Shankar
  blogbench.sh failure
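The scrub-thrashing failure above is the thrasher reporting whatever damage types remain in the MDS damage table after thrashing. A rough sketch of that check follows, assuming an MDS named mds.a and that `damage ls` is asked for JSON output; the daemon name and exact invocation are assumptions, not the thrasher's code.

<pre><code class="python">
# Rough sketch of the post-scrub damage check behind the recurring
# "rank damage found: {'backtrace'}" failure. Assumes a reachable cluster
# and an MDS called mds.a; daemon name and flags are illustrative.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "tell", "mds.a", "damage", "ls", "--format=json"]
)
damage_types = {entry["damage_type"] for entry in json.loads(out)}

# The thrasher fails the job when any damage type (e.g. 'backtrace')
# remains after scrub thrashing.
if damage_types:
    raise RuntimeError(f"rank damage found: {damage_types}")
</code></pre>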
1088
1089
h3. 18 July 2023
1090
1091
* https://tracker.ceph.com/issues/52624
1092
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1093
* https://tracker.ceph.com/issues/57676
1094
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1095
* https://tracker.ceph.com/issues/54460
1096
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1097
* https://tracker.ceph.com/issues/57655
1098
    qa: fs:mixed-clients kernel_untar_build failure
1099
* https://tracker.ceph.com/issues/51964
1100
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1101
* https://tracker.ceph.com/issues/59344
1102
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1103
* https://tracker.ceph.com/issues/61182
1104
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1105
* https://tracker.ceph.com/issues/61957
1106
    test_client_limits.TestClientLimits.test_client_release_bug
1107
* https://tracker.ceph.com/issues/59348
1108
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1109
* https://tracker.ceph.com/issues/61892
1110
    test_strays.TestStrays.test_snapshot_remove failed
1111
* https://tracker.ceph.com/issues/59346
1112
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1113
* https://tracker.ceph.com/issues/44565
1114
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1115
* https://tracker.ceph.com/issues/62067
1116
    ffsb.sh failure "Resource temporarily unavailable"
1117 156 Venky Shankar
1118
1119
h3. 17 July 2023
1120
1121
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1122
1123
* https://tracker.ceph.com/issues/61982
1124
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1125
* https://tracker.ceph.com/issues/59344
1126
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1127
* https://tracker.ceph.com/issues/61182
1128
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1129
* https://tracker.ceph.com/issues/61957
1130
    test_client_limits.TestClientLimits.test_client_release_bug
1131
* https://tracker.ceph.com/issues/61400
1132
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1133
* https://tracker.ceph.com/issues/59348
1134
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1135
* https://tracker.ceph.com/issues/61892
1136
    test_strays.TestStrays.test_snapshot_remove failed
1137
* https://tracker.ceph.com/issues/59346
1138
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1139
* https://tracker.ceph.com/issues/62036
1140
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1141
* https://tracker.ceph.com/issues/61737
1142
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1143
* https://tracker.ceph.com/issues/44565
1144
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1145 155 Rishabh Dave
1146 1 Patrick Donnelly
1147 153 Rishabh Dave
h3. 13 July 2023 Run 2
1148 152 Rishabh Dave
1149
1150
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1151
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1152
1153
* https://tracker.ceph.com/issues/61957
1154
  test_client_limits.TestClientLimits.test_client_release_bug
1155
* https://tracker.ceph.com/issues/61982
1156
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1157
* https://tracker.ceph.com/issues/59348
1158
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1159
* https://tracker.ceph.com/issues/59344
1160
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1161
* https://tracker.ceph.com/issues/54460
1162
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1163
* https://tracker.ceph.com/issues/57655
1164
  qa: fs:mixed-clients kernel_untar_build failure
1165
* https://tracker.ceph.com/issues/61400
1166
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1167
* https://tracker.ceph.com/issues/61399
1168
  ior build failure
1169
1170 151 Venky Shankar
h3. 13 July 2023
1171
1172
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1173
1174
* https://tracker.ceph.com/issues/54460
1175
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1176
* https://tracker.ceph.com/issues/61400
1177
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1178
* https://tracker.ceph.com/issues/57655
1179
    qa: fs:mixed-clients kernel_untar_build failure
1180
* https://tracker.ceph.com/issues/61945
1181
    LibCephFS.DelegTimeout failure
1182
* https://tracker.ceph.com/issues/52624
1183
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1184
* https://tracker.ceph.com/issues/57676
1185
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1186
* https://tracker.ceph.com/issues/59348
1187
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1188
* https://tracker.ceph.com/issues/59344
1189
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1190
* https://tracker.ceph.com/issues/51964
1191
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1192
* https://tracker.ceph.com/issues/59346
1193
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1194
* https://tracker.ceph.com/issues/61982
1195
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1196 150 Rishabh Dave
1197
1198
h3. 13 Jul 2023
1199
1200
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1201
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1202
1203
* https://tracker.ceph.com/issues/61957
1204
  test_client_limits.TestClientLimits.test_client_release_bug
1205
* https://tracker.ceph.com/issues/59348
1206
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1207
* https://tracker.ceph.com/issues/59346
1208
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1209
* https://tracker.ceph.com/issues/48773
1210
  scrub does not complete: reached max tries
1211
* https://tracker.ceph.com/issues/59344
1212
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1213
* https://tracker.ceph.com/issues/52438
1214
  qa: ffsb timeout
1215
* https://tracker.ceph.com/issues/57656
1216
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1217
* https://tracker.ceph.com/issues/58742
1218
  xfstests-dev: kcephfs: generic
1219
* https://tracker.ceph.com/issues/61399
1220 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1221 149 Rishabh Dave
1222 148 Rishabh Dave
h3. 12 July 2023
1223
1224
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1225
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1226
1227
* https://tracker.ceph.com/issues/61892
1228
  test_strays.TestStrays.test_snapshot_remove failed
1229
* https://tracker.ceph.com/issues/59348
1230
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1231
* https://tracker.ceph.com/issues/53859
1232
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1233
* https://tracker.ceph.com/issues/59346
1234
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1235
* https://tracker.ceph.com/issues/58742
1236
  xfstests-dev: kcephfs: generic
1237
* https://tracker.ceph.com/issues/59344
1238
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1239
* https://tracker.ceph.com/issues/52438
1240
  qa: ffsb timeout
1241
* https://tracker.ceph.com/issues/57656
1242
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1243
* https://tracker.ceph.com/issues/54460
1244
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1245
* https://tracker.ceph.com/issues/57655
1246
  qa: fs:mixed-clients kernel_untar_build failure
1247
* https://tracker.ceph.com/issues/61182
1248
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1249
* https://tracker.ceph.com/issues/61400
1250
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1251 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1252 146 Patrick Donnelly
  reached max tries: scrub does not complete
1253
1254
h3. 05 July 2023
1255
1256
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1257
1258 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1259 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1260
1261
h3. 27 Jun 2023
1262
1263
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1264 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1265
1266
* https://tracker.ceph.com/issues/59348
1267
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1268
* https://tracker.ceph.com/issues/54460
1269
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1270
* https://tracker.ceph.com/issues/59346
1271
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1272
* https://tracker.ceph.com/issues/59344
1273
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1274
* https://tracker.ceph.com/issues/61399
1275
  libmpich: undefined references to fi_strerror
1276
* https://tracker.ceph.com/issues/50223
1277
  client.xxxx isn't responding to mclientcaps(revoke)
1278 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1279
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1280 142 Venky Shankar
1281
1282
h3. 22 June 2023
1283
1284
* https://tracker.ceph.com/issues/57676
1285
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1286
* https://tracker.ceph.com/issues/54460
1287
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1288
* https://tracker.ceph.com/issues/59344
1289
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1290
* https://tracker.ceph.com/issues/59348
1291
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1292
* https://tracker.ceph.com/issues/61400
1293
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1294
* https://tracker.ceph.com/issues/57655
1295
    qa: fs:mixed-clients kernel_untar_build failure
1296
* https://tracker.ceph.com/issues/61394
1297
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1298
* https://tracker.ceph.com/issues/61762
1299
    qa: wait_for_clean: failed before timeout expired
1300
* https://tracker.ceph.com/issues/61775
1301
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1302
* https://tracker.ceph.com/issues/44565
1303
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1304
* https://tracker.ceph.com/issues/61790
1305
    cephfs client to mds comms remain silent after reconnect
1306
* https://tracker.ceph.com/issues/61791
1307
    snaptest-git-ceph.sh test timed out (job dead)
1308 139 Venky Shankar
1309
1310
h3. 20 June 2023
1311
1312
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1313
1314
* https://tracker.ceph.com/issues/57676
1315
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1316
* https://tracker.ceph.com/issues/54460
1317
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1318 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1319 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1320 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1321 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1322
* https://tracker.ceph.com/issues/59344
1323
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1324
* https://tracker.ceph.com/issues/59348
1325
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1326
* https://tracker.ceph.com/issues/57656
1327
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1328
* https://tracker.ceph.com/issues/61400
1329
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1330
* https://tracker.ceph.com/issues/57655
1331
    qa: fs:mixed-clients kernel_untar_build failure
1332
* https://tracker.ceph.com/issues/44565
1333
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1334
* https://tracker.ceph.com/issues/61737
1335 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1336
1337
h3. 16 June 2023
1338
1339 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1340 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1341 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1342 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1343
1344
1345
* https://tracker.ceph.com/issues/59344
1346
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1347 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1349 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1350
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1351
* https://tracker.ceph.com/issues/57656
1352
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1353
* https://tracker.ceph.com/issues/54460
1354
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1355 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1356
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1357 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1358
  libmpich: undefined references to fi_strerror
1359
* https://tracker.ceph.com/issues/58945
1360
  xfstests-dev: ceph-fuse: generic 
1361 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1362 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1363
1364
h3. 24 May 2023
1365
1366
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1367
1368
* https://tracker.ceph.com/issues/57676
1369
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1370
* https://tracker.ceph.com/issues/59683
1371
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1372
* https://tracker.ceph.com/issues/61399
1373
    qa: "[Makefile:299: ior] Error 1"
1374
* https://tracker.ceph.com/issues/61265
1375
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1376
* https://tracker.ceph.com/issues/59348
1377
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1378
* https://tracker.ceph.com/issues/59346
1379
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1380
* https://tracker.ceph.com/issues/61400
1381
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1382
* https://tracker.ceph.com/issues/54460
1383
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1384
* https://tracker.ceph.com/issues/51964
1385
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1386
* https://tracker.ceph.com/issues/59344
1387
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1388
* https://tracker.ceph.com/issues/61407
1389
    mds: abort on CInode::verify_dirfrags
1390
* https://tracker.ceph.com/issues/48773
1391
    qa: scrub does not complete
1392
* https://tracker.ceph.com/issues/57655
1393
    qa: fs:mixed-clients kernel_untar_build failure
1394
* https://tracker.ceph.com/issues/61409
1395 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1396
1397
h3. 15 May 2023
1398 130 Venky Shankar
1399 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1400
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1401
1402
* https://tracker.ceph.com/issues/52624
1403
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1404
* https://tracker.ceph.com/issues/54460
1405
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1406
* https://tracker.ceph.com/issues/57676
1407
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1408
* https://tracker.ceph.com/issues/59684 [kclient bug]
1409
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1410
* https://tracker.ceph.com/issues/59348
1411
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1412 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1413
    dbench test results in call trace in dmesg [kclient bug]
1414 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1415 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1416 125 Venky Shankar
1417
 
1418 129 Rishabh Dave
h3. 11 May 2023
1419
1420
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1421
1422
* https://tracker.ceph.com/issues/59684 [kclient bug]
1423
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1424
* https://tracker.ceph.com/issues/59348
1425
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1426
* https://tracker.ceph.com/issues/57655
1427
  qa: fs:mixed-clients kernel_untar_build failure
1428
* https://tracker.ceph.com/issues/57676
1429
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1430
* https://tracker.ceph.com/issues/55805
1431
  error during scrub thrashing reached max tries in 900 secs
1432
* https://tracker.ceph.com/issues/54460
1433
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1434
* https://tracker.ceph.com/issues/57656
1435
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1436
* https://tracker.ceph.com/issues/58220
1437
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1438 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1439
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1440 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1441
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1442 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1443
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1444 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1445
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1446
1447 125 Venky Shankar
h3. 11 May 2023
1448 127 Venky Shankar
1449
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1450 126 Venky Shankar
1451 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1452
 was included in the branch; however, the PR got updated and needs a retest).
1453
1454
* https://tracker.ceph.com/issues/52624
1455
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1456
* https://tracker.ceph.com/issues/54460
1457
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1458
* https://tracker.ceph.com/issues/57676
1459
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1460
* https://tracker.ceph.com/issues/59683
1461
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1462
* https://tracker.ceph.com/issues/59684 [kclient bug]
1463
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1464
* https://tracker.ceph.com/issues/59348
1465 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1466
1467
h3. 09 May 2023
1468
1469
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1470
1471
* https://tracker.ceph.com/issues/52624
1472
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1473
* https://tracker.ceph.com/issues/58340
1474
    mds: fsstress.sh hangs with multimds
1475
* https://tracker.ceph.com/issues/54460
1476
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1477
* https://tracker.ceph.com/issues/57676
1478
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1479
* https://tracker.ceph.com/issues/51964
1480
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1481
* https://tracker.ceph.com/issues/59350
1482
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1483
* https://tracker.ceph.com/issues/59683
1484
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1485
* https://tracker.ceph.com/issues/59684 [kclient bug]
1486
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1487
* https://tracker.ceph.com/issues/59348
1488 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1489
1490
h3. 10 Apr 2023
1491
1492
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1493
1494
* https://tracker.ceph.com/issues/52624
1495
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1496
* https://tracker.ceph.com/issues/58340
1497
    mds: fsstress.sh hangs with multimds
1498
* https://tracker.ceph.com/issues/54460
1499
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1500
* https://tracker.ceph.com/issues/57676
1501
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1502 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1503 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1504 121 Rishabh Dave
1505 120 Rishabh Dave
h3. 31 Mar 2023
1506 122 Rishabh Dave
1507
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1508 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1509
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1510
1511
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1512
1513
* https://tracker.ceph.com/issues/57676
1514
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1515
* https://tracker.ceph.com/issues/54460
1516
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1517
* https://tracker.ceph.com/issues/58220
1518
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1519
* https://tracker.ceph.com/issues/58220#note-9
1520
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1521
* https://tracker.ceph.com/issues/56695
1522
  Command failed (workunit test suites/pjd.sh)
1523
* https://tracker.ceph.com/issues/58564 
1524
  workunit dbench failed with error code 1
1525
* https://tracker.ceph.com/issues/57206
1526
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1527
* https://tracker.ceph.com/issues/57580
1528
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1529
* https://tracker.ceph.com/issues/58940
1530
  ceph osd hit ceph_abort
1531
* https://tracker.ceph.com/issues/55805
1532 118 Venky Shankar
  error scrub thrashing reached max tries in 900 secs
1533
1534
h3. 30 March 2023
1535
1536
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1537
1538
* https://tracker.ceph.com/issues/58938
1539
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1540
* https://tracker.ceph.com/issues/51964
1541
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1542
* https://tracker.ceph.com/issues/58340
1543 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1544
1545 115 Venky Shankar
h3. 29 March 2023
1546 114 Venky Shankar
1547
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1548
1549
* https://tracker.ceph.com/issues/56695
1550
    [RHEL stock] pjd test failures
1551
* https://tracker.ceph.com/issues/57676
1552
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1553
* https://tracker.ceph.com/issues/57087
1554
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1555 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1556
    mds: fsstress.sh hangs with multimds
1557 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1558
    qa: fs:mixed-clients kernel_untar_build failure
1559 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1560
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1561 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1562 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1563
1564
h3. 13 Mar 2023
1565
1566
* https://tracker.ceph.com/issues/56695
1567
    [RHEL stock] pjd test failures
1568
* https://tracker.ceph.com/issues/57676
1569
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1570
* https://tracker.ceph.com/issues/51964
1571
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1572
* https://tracker.ceph.com/issues/54460
1573
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1574
* https://tracker.ceph.com/issues/57656
1575 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1576
1577
h3. 09 Mar 2023
1578
1579
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1580
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1581
1582
* https://tracker.ceph.com/issues/56695
1583
    [RHEL stock] pjd test failures
1584
* https://tracker.ceph.com/issues/57676
1585
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1586
* https://tracker.ceph.com/issues/51964
1587
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1588
* https://tracker.ceph.com/issues/54460
1589
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1590
* https://tracker.ceph.com/issues/58340
1591
    mds: fsstress.sh hangs with multimds
1592
* https://tracker.ceph.com/issues/57087
1593 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1594
1595
h3. 07 Mar 2023
1596
1597
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1598
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1599
1600
* https://tracker.ceph.com/issues/56695
1601
    [RHEL stock] pjd test failures
1602
* https://tracker.ceph.com/issues/57676
1603
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1604
* https://tracker.ceph.com/issues/51964
1605
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1606
* https://tracker.ceph.com/issues/57656
1607
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1608
* https://tracker.ceph.com/issues/57655
1609
    qa: fs:mixed-clients kernel_untar_build failure
1610
* https://tracker.ceph.com/issues/58220
1611
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1612
* https://tracker.ceph.com/issues/54460
1613
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1614
* https://tracker.ceph.com/issues/58934
1615 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1616
1617
h3. 28 Feb 2023
1618
1619
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1620
1621
* https://tracker.ceph.com/issues/56695
1622
    [RHEL stock] pjd test failures
1623
* https://tracker.ceph.com/issues/57676
1624
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1625 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1626 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1627
1628 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1629
1630
h3. 25 Jan 2023
1631
1632
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1633
1634
* https://tracker.ceph.com/issues/52624
1635
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1636
* https://tracker.ceph.com/issues/56695
1637
    [RHEL stock] pjd test failures
1638
* https://tracker.ceph.com/issues/57676
1639
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1640
* https://tracker.ceph.com/issues/56446
1641
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1642
* https://tracker.ceph.com/issues/57206
1643
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1644
* https://tracker.ceph.com/issues/58220
1645
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1646
* https://tracker.ceph.com/issues/58340
1647
  mds: fsstress.sh hangs with multimds
1648
* https://tracker.ceph.com/issues/56011
1649
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1650
* https://tracker.ceph.com/issues/54460
1651 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1652
1653
h3. 30 Jan 2023
1654
1655
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1656
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1657 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1658
1659 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1660
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1661
* https://tracker.ceph.com/issues/56695
1662
  [RHEL stock] pjd test failures
1663
* https://tracker.ceph.com/issues/57676
1664
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1665
* https://tracker.ceph.com/issues/55332
1666
  Failure in snaptest-git-ceph.sh
1667
* https://tracker.ceph.com/issues/51964
1668
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1669
* https://tracker.ceph.com/issues/56446
1670
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1671
* https://tracker.ceph.com/issues/57655 
1672
  qa: fs:mixed-clients kernel_untar_build failure
1673
* https://tracker.ceph.com/issues/54460
1674
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1675 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1676
  mds: fsstress.sh hangs with multimds
1677 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1678 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1679
1680
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1681 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1682
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1683 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1684 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1685
1686
h3. 15 Dec 2022
1687
1688
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1689
1690
* https://tracker.ceph.com/issues/52624
1691
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1692
* https://tracker.ceph.com/issues/56695
1693
    [RHEL stock] pjd test failures
1694
* https://tracker.ceph.com/issues/58219
1695
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1696
* https://tracker.ceph.com/issues/57655
1697
    qa: fs:mixed-clients kernel_untar_build failure
1698
* https://tracker.ceph.com/issues/57676
1699
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1700
* https://tracker.ceph.com/issues/58340
1701 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1702
1703
h3. 08 Dec 2022
1704 99 Venky Shankar
1705 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1706
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1707
1708
(lots of transient git.ceph.com failures)
1709
1710
* https://tracker.ceph.com/issues/52624
1711
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1712
* https://tracker.ceph.com/issues/56695
1713
    [RHEL stock] pjd test failures
1714
* https://tracker.ceph.com/issues/57655
1715
    qa: fs:mixed-clients kernel_untar_build failure
1716
* https://tracker.ceph.com/issues/58219
1717
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1718
* https://tracker.ceph.com/issues/58220
1719
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1720 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1721
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1722 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1723
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1724
* https://tracker.ceph.com/issues/54460
1725
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1726 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1727 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1728
1729
h3. 14 Oct 2022
1730
1731
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1732
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1733
1734
* https://tracker.ceph.com/issues/52624
1735
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1736
* https://tracker.ceph.com/issues/55804
1737
    Command failed (workunit test suites/pjd.sh)
1738
* https://tracker.ceph.com/issues/51964
1739
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1740
* https://tracker.ceph.com/issues/57682
1741
    client: ERROR: test_reconnect_after_blocklisted
1742 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1743 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1744
1745
h3. 10 Oct 2022
1746 92 Rishabh Dave
1747 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1748
1749
Re-runs:
1750
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1751 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1752 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1753 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1754 91 Rishabh Dave
1755
Known bugs:
1756
* https://tracker.ceph.com/issues/52624
1757
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1758
* https://tracker.ceph.com/issues/50223
1759
  client.xxxx isn't responding to mclientcaps(revoke)
1760
* https://tracker.ceph.com/issues/57299
1761
  qa: test_dump_loads fails with JSONDecodeError
1762
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1763
  qa: fs:mixed-clients kernel_untar_build failure
1764
* https://tracker.ceph.com/issues/57206
1765 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1766
1767
h3. 2022 Sep 29
1768
1769
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1770
1771
* https://tracker.ceph.com/issues/55804
1772
  Command failed (workunit test suites/pjd.sh)
1773
* https://tracker.ceph.com/issues/36593
1774
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1775
* https://tracker.ceph.com/issues/52624
1776
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1777
* https://tracker.ceph.com/issues/51964
1778
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1779
* https://tracker.ceph.com/issues/56632
1780
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1781
* https://tracker.ceph.com/issues/50821
1782 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1783
1784
h3. 2022 Sep 26
1785
1786
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1787
1788
* https://tracker.ceph.com/issues/55804
1789
    qa failure: pjd link tests failed
1790
* https://tracker.ceph.com/issues/57676
1791
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1792
* https://tracker.ceph.com/issues/52624
1793
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1794
* https://tracker.ceph.com/issues/57580
1795
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1796
* https://tracker.ceph.com/issues/48773
1797
    qa: scrub does not complete
1798
* https://tracker.ceph.com/issues/57299
1799
    qa: test_dump_loads fails with JSONDecodeError
1800
* https://tracker.ceph.com/issues/57280
1801
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1802
* https://tracker.ceph.com/issues/57205
1803
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1804
* https://tracker.ceph.com/issues/57656
1805
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1806
* https://tracker.ceph.com/issues/57677
1807
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1808
* https://tracker.ceph.com/issues/57206
1809
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1810
* https://tracker.ceph.com/issues/57446
1811
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1812 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1813
    qa: fs:mixed-clients kernel_untar_build failure
1814 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1815
    client: ERROR: test_reconnect_after_blocklisted
1816 87 Patrick Donnelly
1817
1818
h3. 2022 Sep 22
1819
1820
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1821
1822
* https://tracker.ceph.com/issues/57299
1823
    qa: test_dump_loads fails with JSONDecodeError
1824
* https://tracker.ceph.com/issues/57205
1825
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1826
* https://tracker.ceph.com/issues/52624
1827
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1828
* https://tracker.ceph.com/issues/57580
1829
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1830
* https://tracker.ceph.com/issues/57280
1831
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1832
* https://tracker.ceph.com/issues/48773
1833
    qa: scrub does not complete
1834
* https://tracker.ceph.com/issues/56446
1835
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1836
* https://tracker.ceph.com/issues/57206
1837
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1838
* https://tracker.ceph.com/issues/51267
1839
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1840
1841
NEW:
1842
1843
* https://tracker.ceph.com/issues/57656
1844
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1845
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1846
    qa: fs:mixed-clients kernel_untar_build failure
1847
* https://tracker.ceph.com/issues/57657
1848
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1849
1850
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1851 80 Venky Shankar
1852 79 Venky Shankar
1853
h3. 2022 Sep 16
1854
1855
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1856
1857
* https://tracker.ceph.com/issues/57446
1858
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1859
* https://tracker.ceph.com/issues/57299
1860
    qa: test_dump_loads fails with JSONDecodeError
1861
* https://tracker.ceph.com/issues/50223
1862
    client.xxxx isn't responding to mclientcaps(revoke)
1863
* https://tracker.ceph.com/issues/52624
1864
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1865
* https://tracker.ceph.com/issues/57205
1866
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1867
* https://tracker.ceph.com/issues/57280
1868
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1869
* https://tracker.ceph.com/issues/51282
1870
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1871
* https://tracker.ceph.com/issues/48203
1872
  https://tracker.ceph.com/issues/36593
1873
    qa: quota failure
1874
    qa: quota failure caused by clients stepping on each other
1875
* https://tracker.ceph.com/issues/57580
1876 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1877
1878 76 Rishabh Dave
1879
h3. 2022 Aug 26
1880
1881
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1882
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1883
1884
* https://tracker.ceph.com/issues/57206
1885
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1886
* https://tracker.ceph.com/issues/56632
1887
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1888
* https://tracker.ceph.com/issues/56446
1889
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1890
* https://tracker.ceph.com/issues/51964
1891
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1892
* https://tracker.ceph.com/issues/53859
1893
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1894
1895
* https://tracker.ceph.com/issues/54460
1896
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1897
* https://tracker.ceph.com/issues/54462
1898
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1899
* https://tracker.ceph.com/issues/54460
1900
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1901
* https://tracker.ceph.com/issues/36593
1902
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1903
1904
* https://tracker.ceph.com/issues/52624
1905
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1906
* https://tracker.ceph.com/issues/55804
1907
  Command failed (workunit test suites/pjd.sh)
1908
* https://tracker.ceph.com/issues/50223
1909
  client.xxxx isn't responding to mclientcaps(revoke)
1910 75 Venky Shankar
1911
1912
h3. 2022 Aug 22
1913
1914
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1915
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1916
1917
* https://tracker.ceph.com/issues/52624
1918
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1919
* https://tracker.ceph.com/issues/56446
1920
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1921
* https://tracker.ceph.com/issues/55804
1922
    Command failed (workunit test suites/pjd.sh)
1923
* https://tracker.ceph.com/issues/51278
1924
    mds: "FAILED ceph_assert(!segments.empty())"
1925
* https://tracker.ceph.com/issues/54460
1926
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1927
* https://tracker.ceph.com/issues/57205
1928
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1929
* https://tracker.ceph.com/issues/57206
1930
    ceph_test_libcephfs_reclaim crashes during test
1931
* https://tracker.ceph.com/issues/53859
1932
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1933
* https://tracker.ceph.com/issues/50223
1934 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1935
1936
h3. 2022 Aug 12
1937
1938
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1939
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1940
1941
* https://tracker.ceph.com/issues/52624
1942
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1943
* https://tracker.ceph.com/issues/56446
1944
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1945
* https://tracker.ceph.com/issues/51964
1946
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1947
* https://tracker.ceph.com/issues/55804
1948
    Command failed (workunit test suites/pjd.sh)
1949
* https://tracker.ceph.com/issues/50223
1950
    client.xxxx isn't responding to mclientcaps(revoke)
1951
* https://tracker.ceph.com/issues/50821
1952 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1953 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1954 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1955
1956
h3. 2022 Aug 04
1957
1958
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1959
1960 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1961 68 Rishabh Dave
1962
h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128~

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fail with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, merging only unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing