h1. <code>main</code> branch

h3. 2024-04-02

https://tracker.ceph.com/issues/65215

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
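
For context, "log WRN/ERR checks" means that after a job finishes, the cluster log is scanned for warning- and error-level entries that are not on the job's ignore list, and any remaining entry fails the job. A rough sketch of that kind of scan is below; it is illustrative only (the function name and the ignore-list contents are assumptions, not the actual qa/teuthology code):

<pre><code class="python">
import re

# Hypothetical ignore list; real jobs carry per-suite whitelists in their YAML.
IGNORELIST = [
    r"POOL_APP_NOT_ENABLED",
]

def failing_log_lines(log_path):
    """Return cluster-log lines at WRN/ERR level that are not ignored."""
    bad = []
    with open(log_path) as f:
        for line in f:
            if not re.search(r"\[(WRN|ERR)\]", line):
                continue  # only warning/error-level entries matter
            if any(re.search(pat, line) for pat in IGNORELIST):
                continue  # whitelisted message
            bad.append(line.rstrip())
    return bad

# A run is marked failed if this returns anything, which is why repairing the
# check surfaced many previously unnoticed warnings as job failures.
</code></pre>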

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
  suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
  qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
  qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
  qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
  qa/suites/fs/top: "[WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
  qa: "Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
  qa/suites/fs/nfs: "cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
  Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
  qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
  pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
  pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
  suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
  Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
  "mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
  fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to:
  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denial related failures
  c) unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
  Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
  client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
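
The failure surfaces in teuthology as <code>MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds</code> (see the 2024-03-25 run above), i.e. a bounded poll on the mountpoint that never sees the unmount finish. A minimal sketch of that kind of wait loop is below; it is illustrative only (the helper name and timing constants are assumptions, not teuthology's actual implementation):

<pre><code class="python">
import subprocess
import time

def wait_for_unmount(mountpoint, tries=51, interval=6.0):
    """Poll until `mountpoint` is no longer mounted, or give up.

    Roughly mirrors a bounded-retry loop: 51 tries about 6s apart is
    roughly the 300-second window reported in the i64502 failures.
    """
    for _ in range(tries):
        # `mountpoint -q` exits 0 while the path is still a mountpoint.
        if subprocess.run(["mountpoint", "-q", mountpoint]).returncode != 0:
            return  # unmount completed
        time.sleep(interval)
    raise RuntimeError(
        "reached maximum tries (%d) after waiting for %d seconds: %s is still mounted"
        % (tries, int(tries * interval), mountpoint)
    )

# Example: after issuing `fusermount -u /mnt/cephfs`, wait for it to take effect.
# subprocess.run(["fusermount", "-u", "/mnt/cephfs"], check=True)
# wait_for_unmount("/mnt/cephfs")
</code></pre>

In the failing jobs a loop like this never sees the unmount complete, because the ceph-fuse client only lets go once the daemons are torn down later in cleanup.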

h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
  ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
  Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
  snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
  mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* in the last run, job #7507400 failed due to MGR; the FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
  testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
  tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
  ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
  Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
  testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
  No module named 'tasks.ceph_fuse'
  No module named 'tasks.kclient'
  No module named 'tasks.cephfs.fuse_mount'
  No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
  Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(never mind the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
  testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
  ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
  qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
  qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
  error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
  testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
  src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
  mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
  cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
  tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
  Host lost.

One follow-up fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
  qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
  qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
  error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
  mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
  qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
  tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62658
  error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
  qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
  Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62126
  test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
  mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
  tasks/fscrypt-common does not finish, times out

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62847
  mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
  qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
  tasks/fscrypt-common does not finish, times out

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
  iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
  qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
  error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
  qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
  iozone: command not found
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
  src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1016
1017
h3. 28 JULY 2023
1018
1019
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1020
1021
* https://tracker.ceph.com/issues/51964
1022
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1023
* https://tracker.ceph.com/issues/61400
1024
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1025
* https://tracker.ceph.com/issues/61399
1026
    ior build failure
1027
* https://tracker.ceph.com/issues/57676
1028
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1029
* https://tracker.ceph.com/issues/59348
1030
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1031
* https://tracker.ceph.com/issues/59531
1032
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1033
* https://tracker.ceph.com/issues/59344
1034
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1035
* https://tracker.ceph.com/issues/59346
1036
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1037
* https://github.com/ceph/ceph/pull/52556
1038
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1039
* https://tracker.ceph.com/issues/62187
1040
    iozone: command not found
1041
* https://tracker.ceph.com/issues/61399
1042
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1043
* https://tracker.ceph.com/issues/62188
1044 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1045 158 Rishabh Dave
1046
h3. 24 Jul 2023
1047
1048
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1049
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1050
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1051
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1052
One more extra run to check if blogbench.sh fails every time:
1053
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1054
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1055 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1056
1057
* https://tracker.ceph.com/issues/61892
1058
  test_snapshot_remove (test_strays.TestStrays) failed
1059
* https://tracker.ceph.com/issues/53859
1060
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1061
* https://tracker.ceph.com/issues/61982
1062
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1063
* https://tracker.ceph.com/issues/52438
1064
  qa: ffsb timeout
1065
* https://tracker.ceph.com/issues/54460
1066
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1067
* https://tracker.ceph.com/issues/57655
1068
  qa: fs:mixed-clients kernel_untar_build failure
1069
* https://tracker.ceph.com/issues/48773
1070
  reached max tries: scrub does not complete
1071
* https://tracker.ceph.com/issues/58340
1072
  mds: fsstress.sh hangs with multimds
1073
* https://tracker.ceph.com/issues/61400
1074
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1075
* https://tracker.ceph.com/issues/57206
1076
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1077
  
1078
* https://tracker.ceph.com/issues/57656
1079
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1080
* https://tracker.ceph.com/issues/61399
1081
  ior build failure
1082
* https://tracker.ceph.com/issues/57676
1083
  error during scrub thrashing: backtrace
1084
  
1085
* https://tracker.ceph.com/issues/38452
1086
  'sudo -u postgres -- pgbench -s 500 -i' failed
1087 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1088 157 Venky Shankar
  blogbench.sh failure
1089
1090
h3. 18 July 2023
1091
1092
* https://tracker.ceph.com/issues/52624
1093
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1094
* https://tracker.ceph.com/issues/57676
1095
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1096
* https://tracker.ceph.com/issues/54460
1097
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1098
* https://tracker.ceph.com/issues/57655
1099
    qa: fs:mixed-clients kernel_untar_build failure
1100
* https://tracker.ceph.com/issues/51964
1101
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1102
* https://tracker.ceph.com/issues/59344
1103
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1104
* https://tracker.ceph.com/issues/61182
1105
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1106
* https://tracker.ceph.com/issues/61957
1107
    test_client_limits.TestClientLimits.test_client_release_bug
1108
* https://tracker.ceph.com/issues/59348
1109
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1110
* https://tracker.ceph.com/issues/61892
1111
    test_strays.TestStrays.test_snapshot_remove failed
1112
* https://tracker.ceph.com/issues/59346
1113
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1114
* https://tracker.ceph.com/issues/44565
1115
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1116
* https://tracker.ceph.com/issues/62067
1117
    ffsb.sh failure "Resource temporarily unavailable"
1118 156 Venky Shankar
1119
1120
h3. 17 July 2023
1121
1122
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1123
1124
* https://tracker.ceph.com/issues/61982
1125
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1126
* https://tracker.ceph.com/issues/59344
1127
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1128
* https://tracker.ceph.com/issues/61182
1129
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1130
* https://tracker.ceph.com/issues/61957
1131
    test_client_limits.TestClientLimits.test_client_release_bug
1132
* https://tracker.ceph.com/issues/61400
1133
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1134
* https://tracker.ceph.com/issues/59348
1135
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1136
* https://tracker.ceph.com/issues/61892
1137
    test_strays.TestStrays.test_snapshot_remove failed
1138
* https://tracker.ceph.com/issues/59346
1139
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1140
* https://tracker.ceph.com/issues/62036
1141
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1142
* https://tracker.ceph.com/issues/61737
1143
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1144
* https://tracker.ceph.com/issues/44565
1145
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1146 155 Rishabh Dave
1147 1 Patrick Donnelly
1148 153 Rishabh Dave
h3. 13 July 2023 Run 2
1149 152 Rishabh Dave
1150
1151
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1152
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1153
1154
* https://tracker.ceph.com/issues/61957
1155
  test_client_limits.TestClientLimits.test_client_release_bug
1156
* https://tracker.ceph.com/issues/61982
1157
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1158
* https://tracker.ceph.com/issues/59348
1159
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1160
* https://tracker.ceph.com/issues/59344
1161
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1162
* https://tracker.ceph.com/issues/54460
1163
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1164
* https://tracker.ceph.com/issues/57655
1165
  qa: fs:mixed-clients kernel_untar_build failure
1166
* https://tracker.ceph.com/issues/61400
1167
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1168
* https://tracker.ceph.com/issues/61399
1169
  ior build failure
1170
1171 151 Venky Shankar
h3. 13 July 2023
1172
1173
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1174
1175
* https://tracker.ceph.com/issues/54460
1176
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1177
* https://tracker.ceph.com/issues/61400
1178
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1179
* https://tracker.ceph.com/issues/57655
1180
    qa: fs:mixed-clients kernel_untar_build failure
1181
* https://tracker.ceph.com/issues/61945
1182
    LibCephFS.DelegTimeout failure
1183
* https://tracker.ceph.com/issues/52624
1184
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1185
* https://tracker.ceph.com/issues/57676
1186
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1187
* https://tracker.ceph.com/issues/59348
1188
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1189
* https://tracker.ceph.com/issues/59344
1190
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1191
* https://tracker.ceph.com/issues/51964
1192
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1193
* https://tracker.ceph.com/issues/59346
1194
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1195
* https://tracker.ceph.com/issues/61982
1196
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1197 150 Rishabh Dave
1198
1199
h3. 13 Jul 2023
1200
1201
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1202
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1203
1204
* https://tracker.ceph.com/issues/61957
1205
  test_client_limits.TestClientLimits.test_client_release_bug
1206
* https://tracker.ceph.com/issues/59348
1207
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1208
* https://tracker.ceph.com/issues/59346
1209
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1210
* https://tracker.ceph.com/issues/48773
1211
  scrub does not complete: reached max tries
1212
* https://tracker.ceph.com/issues/59344
1213
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1214
* https://tracker.ceph.com/issues/52438
1215
  qa: ffsb timeout
1216
* https://tracker.ceph.com/issues/57656
1217
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1218
* https://tracker.ceph.com/issues/58742
1219
  xfstests-dev: kcephfs: generic
1220
* https://tracker.ceph.com/issues/61399
1221 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1222 149 Rishabh Dave
1223 148 Rishabh Dave
h3. 12 July 2023
1224
1225
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1226
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1227
1228
* https://tracker.ceph.com/issues/61892
1229
  test_strays.TestStrays.test_snapshot_remove failed
1230
* https://tracker.ceph.com/issues/59348
1231
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1232
* https://tracker.ceph.com/issues/53859
1233
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1234
* https://tracker.ceph.com/issues/59346
1235
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1236
* https://tracker.ceph.com/issues/58742
1237
  xfstests-dev: kcephfs: generic
1238
* https://tracker.ceph.com/issues/59344
1239
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1240
* https://tracker.ceph.com/issues/52438
1241
  qa: ffsb timeout
1242
* https://tracker.ceph.com/issues/57656
1243
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1244
* https://tracker.ceph.com/issues/54460
1245
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1246
* https://tracker.ceph.com/issues/57655
1247
  qa: fs:mixed-clients kernel_untar_build failure
1248
* https://tracker.ceph.com/issues/61182
1249
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1250
* https://tracker.ceph.com/issues/61400
1251
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1252 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1253 146 Patrick Donnelly
  reached max tries: scrub does not complete
1254
1255
h3. 05 July 2023
1256
1257
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1258
1259 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1260 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1261
1262
h3. 27 Jun 2023
1263
1264
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1265 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1266
1267
* https://tracker.ceph.com/issues/59348
1268
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1269
* https://tracker.ceph.com/issues/54460
1270
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1271
* https://tracker.ceph.com/issues/59346
1272
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1273
* https://tracker.ceph.com/issues/59344
1274
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1275
* https://tracker.ceph.com/issues/61399
1276
  libmpich: undefined references to fi_strerror
1277
* https://tracker.ceph.com/issues/50223
1278
  client.xxxx isn't responding to mclientcaps(revoke)
1279 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1280
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1281 142 Venky Shankar
1282
1283
h3. 22 June 2023
1284
1285
* https://tracker.ceph.com/issues/57676
1286
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1287
* https://tracker.ceph.com/issues/54460
1288
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1289
* https://tracker.ceph.com/issues/59344
1290
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1291
* https://tracker.ceph.com/issues/59348
1292
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1293
* https://tracker.ceph.com/issues/61400
1294
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1295
* https://tracker.ceph.com/issues/57655
1296
    qa: fs:mixed-clients kernel_untar_build failure
1297
* https://tracker.ceph.com/issues/61394
1298
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1299
* https://tracker.ceph.com/issues/61762
1300
    qa: wait_for_clean: failed before timeout expired
1301
* https://tracker.ceph.com/issues/61775
1302
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1303
* https://tracker.ceph.com/issues/44565
1304
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1305
* https://tracker.ceph.com/issues/61790
1306
    cephfs client to mds comms remain silent after reconnect
1307
* https://tracker.ceph.com/issues/61791
1308
    snaptest-git-ceph.sh test timed out (job dead)
1309 139 Venky Shankar
1310
1311
h3. 20 June 2023
1312
1313
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1314
1315
* https://tracker.ceph.com/issues/57676
1316
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1317
* https://tracker.ceph.com/issues/54460
1318
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1319 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1320 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1321 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1322 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1323
* https://tracker.ceph.com/issues/59344
1324
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1325
* https://tracker.ceph.com/issues/59348
1326
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1327
* https://tracker.ceph.com/issues/57656
1328
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1329
* https://tracker.ceph.com/issues/61400
1330
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1331
* https://tracker.ceph.com/issues/57655
1332
    qa: fs:mixed-clients kernel_untar_build failure
1333
* https://tracker.ceph.com/issues/44565
1334
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1335
* https://tracker.ceph.com/issues/61737
1336 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1337
1338
h3. 16 June 2023
1339
1340 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1341 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1342 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1343 1 Patrick Donnelly
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1344
1345
1346
* https://tracker.ceph.com/issues/59344
1347
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1348 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1349
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1350 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1351
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1352
* https://tracker.ceph.com/issues/57656
1353
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1354
* https://tracker.ceph.com/issues/54460
1355
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1356 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1357
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1358 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1359
  libmpich: undefined references to fi_strerror
1360
* https://tracker.ceph.com/issues/58945
1361
  xfstests-dev: ceph-fuse: generic 
1362 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1363 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1364
1365
h3. 24 May 2023
1366
1367
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1368
1369
* https://tracker.ceph.com/issues/57676
1370
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1371
* https://tracker.ceph.com/issues/59683
1372
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1373
* https://tracker.ceph.com/issues/61399
1374
    qa: "[Makefile:299: ior] Error 1"
1375
* https://tracker.ceph.com/issues/61265
1376
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1377
* https://tracker.ceph.com/issues/59348
1378
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1379
* https://tracker.ceph.com/issues/59346
1380
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1381
* https://tracker.ceph.com/issues/61400
1382
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1383
* https://tracker.ceph.com/issues/54460
1384
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1385
* https://tracker.ceph.com/issues/51964
1386
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1387
* https://tracker.ceph.com/issues/59344
1388
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1389
* https://tracker.ceph.com/issues/61407
1390
    mds: abort on CInode::verify_dirfrags
1391
* https://tracker.ceph.com/issues/48773
1392
    qa: scrub does not complete
1393
* https://tracker.ceph.com/issues/57655
1394
    qa: fs:mixed-clients kernel_untar_build failure
1395
* https://tracker.ceph.com/issues/61409
1396 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1397
1398
h3. 15 May 2023
1399 130 Venky Shankar
1400 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1401
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1402
1403
* https://tracker.ceph.com/issues/52624
1404
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1405
* https://tracker.ceph.com/issues/54460
1406
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1407
* https://tracker.ceph.com/issues/57676
1408
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1409
* https://tracker.ceph.com/issues/59684 [kclient bug]
1410
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1411
* https://tracker.ceph.com/issues/59348
1412
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1413 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1414
    dbench test results in call trace in dmesg [kclient bug]
1415 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1416 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1417 125 Venky Shankar
1418
 
1419 129 Rishabh Dave
h3. 11 May 2023
1420
1421
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1422
1423
* https://tracker.ceph.com/issues/59684 [kclient bug]
1424
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1425
* https://tracker.ceph.com/issues/59348
1426
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1427
* https://tracker.ceph.com/issues/57655
1428
  qa: fs:mixed-clients kernel_untar_build failure
1429
* https://tracker.ceph.com/issues/57676
1430
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1431
* https://tracker.ceph.com/issues/55805
1432
  error during scrub thrashing reached max tries in 900 secs
1433
* https://tracker.ceph.com/issues/54460
1434
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1435
* https://tracker.ceph.com/issues/57656
1436
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1437
* https://tracker.ceph.com/issues/58220
1438
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1439 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1440
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1441 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1442
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1443 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1444
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1445 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1446
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1447
1448 125 Venky Shankar
h3. 11 May 2023
1449 127 Venky Shankar
1450
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1451 126 Venky Shankar
1452 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1453
 was included in the branch; however, the PR got updated and needs a retest).
1454
1455
* https://tracker.ceph.com/issues/52624
1456
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1457
* https://tracker.ceph.com/issues/54460
1458
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1459
* https://tracker.ceph.com/issues/57676
1460
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1461
* https://tracker.ceph.com/issues/59683
1462
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1463
* https://tracker.ceph.com/issues/59684 [kclient bug]
1464
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1465
* https://tracker.ceph.com/issues/59348
1466 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1467
1468
h3. 09 May 2023
1469
1470
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1471
1472
* https://tracker.ceph.com/issues/52624
1473
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1474
* https://tracker.ceph.com/issues/58340
1475
    mds: fsstress.sh hangs with multimds
1476
* https://tracker.ceph.com/issues/54460
1477
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1478
* https://tracker.ceph.com/issues/57676
1479
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1480
* https://tracker.ceph.com/issues/51964
1481
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1482
* https://tracker.ceph.com/issues/59350
1483
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1484
* https://tracker.ceph.com/issues/59683
1485
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1486
* https://tracker.ceph.com/issues/59684 [kclient bug]
1487
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1488
* https://tracker.ceph.com/issues/59348
1489 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1490
1491
h3. 10 Apr 2023
1492
1493
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1494
1495
* https://tracker.ceph.com/issues/52624
1496
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1497
* https://tracker.ceph.com/issues/58340
1498
    mds: fsstress.sh hangs with multimds
1499
* https://tracker.ceph.com/issues/54460
1500
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1501
* https://tracker.ceph.com/issues/57676
1502
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1503 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1504 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1505 121 Rishabh Dave
1506 120 Rishabh Dave
h3. 31 Mar 2023
1507 122 Rishabh Dave
1508
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1509 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1510
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1511
1512
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1513
1514
* https://tracker.ceph.com/issues/57676
1515
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1516
* https://tracker.ceph.com/issues/54460
1517
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1518
* https://tracker.ceph.com/issues/58220
1519
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1520
* https://tracker.ceph.com/issues/58220#note-9
1521
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1522
* https://tracker.ceph.com/issues/56695
1523
  Command failed (workunit test suites/pjd.sh)
1524
* https://tracker.ceph.com/issues/58564 
1525
  workunit dbench failed with error code 1
1526
* https://tracker.ceph.com/issues/57206
1527
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1528
* https://tracker.ceph.com/issues/57580
1529
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1530
* https://tracker.ceph.com/issues/58940
1531
  ceph osd hit ceph_abort
1532
* https://tracker.ceph.com/issues/55805
1533 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1534
1535
h3. 30 March 2023
1536
1537
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1538
1539
* https://tracker.ceph.com/issues/58938
1540
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1541
* https://tracker.ceph.com/issues/51964
1542
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1543
* https://tracker.ceph.com/issues/58340
1544 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1545
1546 115 Venky Shankar
h3. 29 March 2023
1547 114 Venky Shankar
1548
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1549
1550
* https://tracker.ceph.com/issues/56695
1551
    [RHEL stock] pjd test failures
1552
* https://tracker.ceph.com/issues/57676
1553
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1554
* https://tracker.ceph.com/issues/57087
1555
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1556 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1557
    mds: fsstress.sh hangs with multimds
1558 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1559
    qa: fs:mixed-clients kernel_untar_build failure
1560 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1561
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1562 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1563 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1564
1565
h3. 13 Mar 2023
1566
1567
* https://tracker.ceph.com/issues/56695
1568
    [RHEL stock] pjd test failures
1569
* https://tracker.ceph.com/issues/57676
1570
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1571
* https://tracker.ceph.com/issues/51964
1572
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1573
* https://tracker.ceph.com/issues/54460
1574
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1575
* https://tracker.ceph.com/issues/57656
1576 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1577
1578
h3. 09 Mar 2023
1579
1580
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1581
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1582
1583
* https://tracker.ceph.com/issues/56695
1584
    [RHEL stock] pjd test failures
1585
* https://tracker.ceph.com/issues/57676
1586
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1587
* https://tracker.ceph.com/issues/51964
1588
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1589
* https://tracker.ceph.com/issues/54460
1590
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1591
* https://tracker.ceph.com/issues/58340
1592
    mds: fsstress.sh hangs with multimds
1593
* https://tracker.ceph.com/issues/57087
1594 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1595
1596
h3. 07 Mar 2023
1597
1598
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1599
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1600
1601
* https://tracker.ceph.com/issues/56695
1602
    [RHEL stock] pjd test failures
1603
* https://tracker.ceph.com/issues/57676
1604
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1605
* https://tracker.ceph.com/issues/51964
1606
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1607
* https://tracker.ceph.com/issues/57656
1608
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1609
* https://tracker.ceph.com/issues/57655
1610
    qa: fs:mixed-clients kernel_untar_build failure
1611
* https://tracker.ceph.com/issues/58220
1612
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1613
* https://tracker.ceph.com/issues/54460
1614
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1615
* https://tracker.ceph.com/issues/58934
1616 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1617
1618
h3. 28 Feb 2023
1619
1620
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1621
1622
* https://tracker.ceph.com/issues/56695
1623
    [RHEL stock] pjd test failures
1624
* https://tracker.ceph.com/issues/57676
1625
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1626 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1627 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1628
1629 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1630
1631
h3. 25 Jan 2023
1632
1633
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1634
1635
* https://tracker.ceph.com/issues/52624
1636
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1637
* https://tracker.ceph.com/issues/56695
1638
    [RHEL stock] pjd test failures
1639
* https://tracker.ceph.com/issues/57676
1640
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1641
* https://tracker.ceph.com/issues/56446
1642
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1643
* https://tracker.ceph.com/issues/57206
1644
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1645
* https://tracker.ceph.com/issues/58220
1646
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1647
* https://tracker.ceph.com/issues/58340
1648
  mds: fsstress.sh hangs with multimds
1649
* https://tracker.ceph.com/issues/56011
1650
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1651
* https://tracker.ceph.com/issues/54460
1652 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1653
1654
h3. 30 JAN 2023
1655
1656
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1657
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1658 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1659
1660 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1661
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1662
* https://tracker.ceph.com/issues/56695
1663
  [RHEL stock] pjd test failures
1664
* https://tracker.ceph.com/issues/57676
1665
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1666
* https://tracker.ceph.com/issues/55332
1667
  Failure in snaptest-git-ceph.sh
1668
* https://tracker.ceph.com/issues/51964
1669
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1670
* https://tracker.ceph.com/issues/56446
1671
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1672
* https://tracker.ceph.com/issues/57655 
1673
  qa: fs:mixed-clients kernel_untar_build failure
1674
* https://tracker.ceph.com/issues/54460
1675
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1676 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1677
  mds: fsstress.sh hangs with multimds
1678 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1679 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1680
1681
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1682 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1683
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1684 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1685 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1686
1687
h3. 15 Dec 2022
1688
1689
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1690
1691
* https://tracker.ceph.com/issues/52624
1692
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1693
* https://tracker.ceph.com/issues/56695
1694
    [RHEL stock] pjd test failures
1695
* https://tracker.ceph.com/issues/58219
1696
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1697
* https://tracker.ceph.com/issues/57655
1698
    qa: fs:mixed-clients kernel_untar_build failure
1699
* https://tracker.ceph.com/issues/57676
1700
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1701
* https://tracker.ceph.com/issues/58340
1702 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1703
1704
h3. 08 Dec 2022
1705 99 Venky Shankar
1706 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1707
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1708
1709
(lots of transient git.ceph.com failures)
1710
1711
* https://tracker.ceph.com/issues/52624
1712
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1713
* https://tracker.ceph.com/issues/56695
1714
    [RHEL stock] pjd test failures
1715
* https://tracker.ceph.com/issues/57655
1716
    qa: fs:mixed-clients kernel_untar_build failure
1717
* https://tracker.ceph.com/issues/58219
1718
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1719
* https://tracker.ceph.com/issues/58220
1720
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1721 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1722
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1723 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1724
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1725
* https://tracker.ceph.com/issues/54460
1726
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1727 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1728 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1729
1730
h3. 14 Oct 2022
1731
1732
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1733
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1734
1735
* https://tracker.ceph.com/issues/52624
1736
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1737
* https://tracker.ceph.com/issues/55804
1738
    Command failed (workunit test suites/pjd.sh)
1739
* https://tracker.ceph.com/issues/51964
1740
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1741
* https://tracker.ceph.com/issues/57682
1742
    client: ERROR: test_reconnect_after_blocklisted
1743 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1744 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1745
1746
h3. 10 Oct 2022
1747 92 Rishabh Dave
1748 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1749
1750
Re-runs:
1751
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1752 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1753 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1754 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1755 91 Rishabh Dave
1756
Known bugs:
1757
* https://tracker.ceph.com/issues/52624
1758
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1759
* https://tracker.ceph.com/issues/50223
1760
  client.xxxx isn't responding to mclientcaps(revoke)
1761
* https://tracker.ceph.com/issues/57299
1762
  qa: test_dump_loads fails with JSONDecodeError
1763
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1764
  qa: fs:mixed-clients kernel_untar_build failure
1765
* https://tracker.ceph.com/issues/57206
1766 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1767
1768
h3. 2022 Sep 29
1769
1770
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1771
1772
* https://tracker.ceph.com/issues/55804
1773
  Command failed (workunit test suites/pjd.sh)
1774
* https://tracker.ceph.com/issues/36593
1775
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1776
* https://tracker.ceph.com/issues/52624
1777
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1778
* https://tracker.ceph.com/issues/51964
1779
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1780
* https://tracker.ceph.com/issues/56632
1781
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1782
* https://tracker.ceph.com/issues/50821
1783 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1784
1785
h3. 2022 Sep 26
1786
1787
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1788
1789
* https://tracker.ceph.com/issues/55804
1790
    qa failure: pjd link tests failed
1791
* https://tracker.ceph.com/issues/57676
1792
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1793
* https://tracker.ceph.com/issues/52624
1794
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1795
* https://tracker.ceph.com/issues/57580
1796
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1797
* https://tracker.ceph.com/issues/48773
1798
    qa: scrub does not complete
1799
* https://tracker.ceph.com/issues/57299
1800
    qa: test_dump_loads fails with JSONDecodeError
1801
* https://tracker.ceph.com/issues/57280
1802
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1803
* https://tracker.ceph.com/issues/57205
1804
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1805
* https://tracker.ceph.com/issues/57656
1806
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1807
* https://tracker.ceph.com/issues/57677
1808
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1809
* https://tracker.ceph.com/issues/57206
1810
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1811
* https://tracker.ceph.com/issues/57446
1812
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1813 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1814
    qa: fs:mixed-clients kernel_untar_build failure
1815 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1816
    client: ERROR: test_reconnect_after_blocklisted
1817 87 Patrick Donnelly
1818
1819
h3. 2022 Sep 22
1820
1821
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1822
1823
* https://tracker.ceph.com/issues/57299
1824
    qa: test_dump_loads fails with JSONDecodeError
1825
* https://tracker.ceph.com/issues/57205
1826
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1827
* https://tracker.ceph.com/issues/52624
1828
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1829
* https://tracker.ceph.com/issues/57580
1830
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1831
* https://tracker.ceph.com/issues/57280
1832
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1833
* https://tracker.ceph.com/issues/48773
1834
    qa: scrub does not complete
1835
* https://tracker.ceph.com/issues/56446
1836
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1837
* https://tracker.ceph.com/issues/57206
1838
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1839
* https://tracker.ceph.com/issues/51267
1840
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1841
1842
NEW:
1843
1844
* https://tracker.ceph.com/issues/57656
1845
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1846
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1847
    qa: fs:mixed-clients kernel_untar_build failure
1848
* https://tracker.ceph.com/issues/57657
1849
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1850
1851
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1852 80 Venky Shankar
1853 79 Venky Shankar
1854
h3. 2022 Sep 16
1855
1856
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1857
1858
* https://tracker.ceph.com/issues/57446
1859
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1860
* https://tracker.ceph.com/issues/57299
1861
    qa: test_dump_loads fails with JSONDecodeError
1862
* https://tracker.ceph.com/issues/50223
1863
    client.xxxx isn't responding to mclientcaps(revoke)
1864
* https://tracker.ceph.com/issues/52624
1865
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1866
* https://tracker.ceph.com/issues/57205
1867
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1868
* https://tracker.ceph.com/issues/57280
1869
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1870
* https://tracker.ceph.com/issues/51282
1871
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1872
* https://tracker.ceph.com/issues/48203
1873
  https://tracker.ceph.com/issues/36593
1874
    qa: quota failure
1875
    qa: quota failure caused by clients stepping on each other
1876
* https://tracker.ceph.com/issues/57580
1877 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1878
1879 76 Rishabh Dave
1880
h3. 2022 Aug 26
1881
1882
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1883
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1884
1885
* https://tracker.ceph.com/issues/57206
1886
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1887
* https://tracker.ceph.com/issues/56632
1888
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1889
* https://tracker.ceph.com/issues/56446
1890
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1891
* https://tracker.ceph.com/issues/51964
1892
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1893
* https://tracker.ceph.com/issues/53859
1894
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1895
1896
* https://tracker.ceph.com/issues/54460
1897
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1898
* https://tracker.ceph.com/issues/54462
1899
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1900
1901
1902
* https://tracker.ceph.com/issues/36593
1903
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1904
1905
* https://tracker.ceph.com/issues/52624
1906
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1907
* https://tracker.ceph.com/issues/55804
1908
  Command failed (workunit test suites/pjd.sh)
1909
* https://tracker.ceph.com/issues/50223
1910
  client.xxxx isn't responding to mclientcaps(revoke)
1911 75 Venky Shankar
1912
1913
h3. 2022 Aug 22
1914
1915
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1916
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1917
1918
* https://tracker.ceph.com/issues/52624
1919
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1920
* https://tracker.ceph.com/issues/56446
1921
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1922
* https://tracker.ceph.com/issues/55804
1923
    Command failed (workunit test suites/pjd.sh)
1924
* https://tracker.ceph.com/issues/51278
1925
    mds: "FAILED ceph_assert(!segments.empty())"
1926
* https://tracker.ceph.com/issues/54460
1927
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1928
* https://tracker.ceph.com/issues/57205
1929
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1930
* https://tracker.ceph.com/issues/57206
1931
    ceph_test_libcephfs_reclaim crashes during test
1932
* https://tracker.ceph.com/issues/53859
1933
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1934
* https://tracker.ceph.com/issues/50223
1935 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1936
1937
h3. 2022 Aug 12
1938
1939
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1940
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1941
1942
* https://tracker.ceph.com/issues/52624
1943
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1944
* https://tracker.ceph.com/issues/56446
1945
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1946
* https://tracker.ceph.com/issues/51964
1947
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1948
* https://tracker.ceph.com/issues/55804
1949
    Command failed (workunit test suites/pjd.sh)
1950
* https://tracker.ceph.com/issues/50223
1951
    client.xxxx isn't responding to mclientcaps(revoke)
1952
* https://tracker.ceph.com/issues/50821
1953 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1954 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1955 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1956
1957
h3. 2022 Aug 04
1958
1959
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1960
1961 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1962 68 Rishabh Dave
1963
h3. 2022 Jul 25
1964
1965
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1966
1967 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1968
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1969 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1970
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1971
1972
* https://tracker.ceph.com/issues/55804
1973
  Command failed (workunit test suites/pjd.sh)
1974
* https://tracker.ceph.com/issues/50223
1975
  client.xxxx isn't responding to mclientcaps(revoke)
1976
1977
* https://tracker.ceph.com/issues/54460
1978
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1979 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1980 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1981 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1982 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128~
1983
1984
h3. 2022 July 22
1985
1986
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1987
1988
MDS_HEALTH_DUMMY error in log fixed by followup commit.
1989
transient selinux ping failure
1990
1991
* https://tracker.ceph.com/issues/56694
1992
    qa: avoid blocking forever on hung umount
1993
* https://tracker.ceph.com/issues/56695
1994
    [RHEL stock] pjd test failures
1995
* https://tracker.ceph.com/issues/56696
1996
    admin keyring disappears during qa run
1997
* https://tracker.ceph.com/issues/56697
1998
    qa: fs/snaps fails for fuse
1999
* https://tracker.ceph.com/issues/50222
2000
    osd: 5.2s0 deep-scrub : stat mismatch
2001
* https://tracker.ceph.com/issues/56698
2002
    client: FAILED ceph_assert(_size == 0)
2003
* https://tracker.ceph.com/issues/50223
2004
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2005 66 Rishabh Dave
2006 65 Rishabh Dave
2007
h3. 2022 Jul 15
2008
2009
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2010
2011
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2012
2013
* https://tracker.ceph.com/issues/53859
2014
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2015
* https://tracker.ceph.com/issues/55804
2016
  Command failed (workunit test suites/pjd.sh)
2017
* https://tracker.ceph.com/issues/50223
2018
  client.xxxx isn't responding to mclientcaps(revoke)
2019
* https://tracker.ceph.com/issues/50222
2020
  osd: deep-scrub : stat mismatch
2021
2022
* https://tracker.ceph.com/issues/56632
2023
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2024
* https://tracker.ceph.com/issues/56634
2025
  workunit test fs/snaps/snaptest-intodir.sh
2026
* https://tracker.ceph.com/issues/56644
2027
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2028
2029 61 Rishabh Dave
2030
2031
h3. 2022 July 05
2032 62 Rishabh Dave
2033 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2034
2035
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2036
2037
On 2nd re-run only a few jobs failed -
2038 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2039
2040
2041
* https://tracker.ceph.com/issues/56446
2042
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2043
* https://tracker.ceph.com/issues/55804
2044
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2045
2046
* https://tracker.ceph.com/issues/56445
2047 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2048
* https://tracker.ceph.com/issues/51267
2049
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2050 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2051
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2052 61 Rishabh Dave
2053 58 Venky Shankar
2054
2055
h3. 2022 July 04
2056
2057
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2058
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2059
2060
* https://tracker.ceph.com/issues/56445
2061 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2062
* https://tracker.ceph.com/issues/56446
2063
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2064
* https://tracker.ceph.com/issues/51964
2065 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2066 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2067 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2068
2069
h3. 2022 June 20
2070
2071
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2072
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2073
2074
* https://tracker.ceph.com/issues/52624
2075
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2076
* https://tracker.ceph.com/issues/55804
2077
    qa failure: pjd link tests failed
2078
* https://tracker.ceph.com/issues/54108
2079
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2080
* https://tracker.ceph.com/issues/55332
2081 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2082
2083
h3. 2022 June 13
2084
2085
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2086
2087
* https://tracker.ceph.com/issues/56024
2088
    cephadm: removes ceph.conf during qa run causing command failure
2089
* https://tracker.ceph.com/issues/48773
2090
    qa: scrub does not complete
2091
* https://tracker.ceph.com/issues/56012
2092
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2093 55 Venky Shankar
2094 54 Venky Shankar
2095
h3. 2022 Jun 13
2096
2097
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2098
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2099
2100
* https://tracker.ceph.com/issues/52624
2101
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2102
* https://tracker.ceph.com/issues/51964
2103
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2104
* https://tracker.ceph.com/issues/53859
2105
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2106
* https://tracker.ceph.com/issues/55804
2107
    qa failure: pjd link tests failed
2108
* https://tracker.ceph.com/issues/56003
2109
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2110
* https://tracker.ceph.com/issues/56011
2111
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2112
* https://tracker.ceph.com/issues/56012
2113 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2114
2115
h3. 2022 Jun 07
2116
2117
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2118
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2119
2120
* https://tracker.ceph.com/issues/52624
2121
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2122
* https://tracker.ceph.com/issues/50223
2123
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2124
* https://tracker.ceph.com/issues/50224
2125 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2126
2127
h3. 2022 May 12
2128 52 Venky Shankar
2129 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2130
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2131
2132
* https://tracker.ceph.com/issues/52624
2133
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2134
* https://tracker.ceph.com/issues/50223
2135
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2136
* https://tracker.ceph.com/issues/55332
2137
    Failure in snaptest-git-ceph.sh
2138
* https://tracker.ceph.com/issues/53859
2139 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2140 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2141
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2142 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2143 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2144
2145 50 Venky Shankar
h3. 2022 May 04
2146
2147
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2148 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2149
2150
* https://tracker.ceph.com/issues/52624
2151
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2152
* https://tracker.ceph.com/issues/50223
2153
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2154
* https://tracker.ceph.com/issues/55332
2155
    Failure in snaptest-git-ceph.sh
2156
* https://tracker.ceph.com/issues/53859
2157
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2158
* https://tracker.ceph.com/issues/55516
2159
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2160
* https://tracker.ceph.com/issues/55537
2161
    mds: crash during fs:upgrade test
2162
* https://tracker.ceph.com/issues/55538
2163 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2164
2165
h3. 2022 Apr 25
2166
2167
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2168
2169
* https://tracker.ceph.com/issues/52624
2170
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2171
* https://tracker.ceph.com/issues/50223
2172
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2173
* https://tracker.ceph.com/issues/55258
2174
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2175
* https://tracker.ceph.com/issues/55377
2176 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2177
2178
h3. 2022 Apr 14
2179
2180
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2181
2182
* https://tracker.ceph.com/issues/52624
2183
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2184
* https://tracker.ceph.com/issues/50223
2185
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2186
* https://tracker.ceph.com/issues/52438
2187
    qa: ffsb timeout
2188
* https://tracker.ceph.com/issues/55170
2189
    mds: crash during rejoin (CDir::fetch_keys)
2190
* https://tracker.ceph.com/issues/55331
2191
    pjd failure
2192
* https://tracker.ceph.com/issues/48773
2193
    qa: scrub does not complete
2194
* https://tracker.ceph.com/issues/55332
2195
    Failure in snaptest-git-ceph.sh
2196
* https://tracker.ceph.com/issues/55258
2197 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2198
2199 46 Venky Shankar
h3. 2022 Apr 11
2200 45 Venky Shankar
2201
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2202
2203
* https://tracker.ceph.com/issues/48773
2204
    qa: scrub does not complete
2205
* https://tracker.ceph.com/issues/52624
2206
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2207
* https://tracker.ceph.com/issues/52438
2208
    qa: ffsb timeout
2209
* https://tracker.ceph.com/issues/48680
2210
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2211
* https://tracker.ceph.com/issues/55236
2212
    qa: fs/snaps tests fails with "hit max job timeout"
2213
* https://tracker.ceph.com/issues/54108
2214
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2215
* https://tracker.ceph.com/issues/54971
2216
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2217
* https://tracker.ceph.com/issues/50223
2218
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2219
* https://tracker.ceph.com/issues/55258
2220 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2221 42 Venky Shankar
2222 43 Venky Shankar
h3. 2022 Mar 21
2223
2224
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2225
2226
Run didn't go well, lots of failures - debugging by dropping PRs and running against master branch. Only merging unrelated PRs that pass tests.
2227
2228
2229 42 Venky Shankar
h3. 2022 Mar 08
2230
2231
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2232
2233
rerun with
2234
- (drop) https://github.com/ceph/ceph/pull/44679
2235
- (drop) https://github.com/ceph/ceph/pull/44958
2236
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2237
2238
* https://tracker.ceph.com/issues/54419 (new)
2239
    `ceph orch upgrade start` seems to never reach completion
2240
* https://tracker.ceph.com/issues/51964
2241
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2242
* https://tracker.ceph.com/issues/52624
2243
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2244
* https://tracker.ceph.com/issues/50223
2245
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2246
* https://tracker.ceph.com/issues/52438
2247
    qa: ffsb timeout
2248
* https://tracker.ceph.com/issues/50821
2249
    qa: untar_snap_rm failure during mds thrashing
2250 41 Venky Shankar
2251
2252
h3. 2022 Feb 09
2253
2254
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2255
2256
rerun with
2257
- (drop) https://github.com/ceph/ceph/pull/37938
2258
- (drop) https://github.com/ceph/ceph/pull/44335
2259
- (drop) https://github.com/ceph/ceph/pull/44491
2260
- (drop) https://github.com/ceph/ceph/pull/44501
2261
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2262
2263
* https://tracker.ceph.com/issues/51964
2264
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2265
* https://tracker.ceph.com/issues/54066
2266
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2267
* https://tracker.ceph.com/issues/48773
2268
    qa: scrub does not complete
2269
* https://tracker.ceph.com/issues/52624
2270
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2271
* https://tracker.ceph.com/issues/50223
2272
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2273
* https://tracker.ceph.com/issues/52438
2274 40 Patrick Donnelly
    qa: ffsb timeout
2275
2276
h3. 2022 Feb 01
2277
2278
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2279
2280
* https://tracker.ceph.com/issues/54107
2281
    kclient: hang during umount
2282
* https://tracker.ceph.com/issues/54106
2283
    kclient: hang during workunit cleanup
2284
* https://tracker.ceph.com/issues/54108
2285
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2286
* https://tracker.ceph.com/issues/48773
2287
    qa: scrub does not complete
2288
* https://tracker.ceph.com/issues/52438
2289
    qa: ffsb timeout
2290 36 Venky Shankar
2291
2292
h3. 2022 Jan 13
2293 39 Venky Shankar
2294 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2295 38 Venky Shankar
2296
rerun with:
2297 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2298
- (drop) https://github.com/ceph/ceph/pull/43184
2299
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2300
2301
* https://tracker.ceph.com/issues/50223
2302
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2303
* https://tracker.ceph.com/issues/51282
2304
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2305
* https://tracker.ceph.com/issues/48773
2306
    qa: scrub does not complete
2307
* https://tracker.ceph.com/issues/52624
2308
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2309
* https://tracker.ceph.com/issues/53859
2310 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2311
2312
h3. 2022 Jan 03
2313
2314
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2315
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2316
2317
* https://tracker.ceph.com/issues/50223
2318
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2319
* https://tracker.ceph.com/issues/51964
2320
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2321
* https://tracker.ceph.com/issues/51267
2322
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2323
* https://tracker.ceph.com/issues/51282
2324
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2325
* https://tracker.ceph.com/issues/50821
2326
    qa: untar_snap_rm failure during mds thrashing
2327 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2328
    mds: "FAILED ceph_assert(!segments.empty())"
2329
* https://tracker.ceph.com/issues/52279
2330 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2331 33 Patrick Donnelly
2332
2333
h3. 2021 Dec 22
2334
2335
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2336
2337
* https://tracker.ceph.com/issues/52624
2338
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2339
* https://tracker.ceph.com/issues/50223
2340
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2341
* https://tracker.ceph.com/issues/52279
2342
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2343
* https://tracker.ceph.com/issues/50224
2344
    qa: test_mirroring_init_failure_with_recovery failure
2345
* https://tracker.ceph.com/issues/48773
2346
    qa: scrub does not complete
2347 32 Venky Shankar
2348
2349
h3. 2021 Nov 30
2350
2351
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2352
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2353
2354
* https://tracker.ceph.com/issues/53436
2355
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2356
* https://tracker.ceph.com/issues/51964
2357
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2358
* https://tracker.ceph.com/issues/48812
2359
    qa: test_scrub_pause_and_resume_with_abort failure
2360
* https://tracker.ceph.com/issues/51076
2361
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2362
* https://tracker.ceph.com/issues/50223
2363
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2364
* https://tracker.ceph.com/issues/52624
2365
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2366
* https://tracker.ceph.com/issues/50250
2367
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2368 31 Patrick Donnelly
2369
2370
h3. 2021 November 9
2371
2372
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2373
2374
* https://tracker.ceph.com/issues/53214
2375
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2376
* https://tracker.ceph.com/issues/48773
2377
    qa: scrub does not complete
2378
* https://tracker.ceph.com/issues/50223
2379
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2380
* https://tracker.ceph.com/issues/51282
2381
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2382
* https://tracker.ceph.com/issues/52624
2383
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2384
* https://tracker.ceph.com/issues/53216
2385
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2386
* https://tracker.ceph.com/issues/50250
2387
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2388
2389 30 Patrick Donnelly
2390
2391
h3. 2021 November 03
2392
2393
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2394
2395
* https://tracker.ceph.com/issues/51964
2396
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2397
* https://tracker.ceph.com/issues/51282
2398
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2399
* https://tracker.ceph.com/issues/52436
2400
    fs/ceph: "corrupt mdsmap"
2401
* https://tracker.ceph.com/issues/53074
2402
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2403
* https://tracker.ceph.com/issues/53150
2404
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2405
* https://tracker.ceph.com/issues/53155
2406
    MDSMonitor: assertion during upgrade to v16.2.5+
2407 29 Patrick Donnelly
2408
2409
h3. 2021 October 26
2410
2411
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2412
2413
* https://tracker.ceph.com/issues/53074
2414
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2415
* https://tracker.ceph.com/issues/52997
2416
    testing: hanging umount
2417
* https://tracker.ceph.com/issues/50824
2418
    qa: snaptest-git-ceph bus error
2419
* https://tracker.ceph.com/issues/52436
2420
    fs/ceph: "corrupt mdsmap"
2421
* https://tracker.ceph.com/issues/48773
2422
    qa: scrub does not complete
2423
* https://tracker.ceph.com/issues/53082
2424
    ceph-fuse: segmentation fault in Client::handle_mds_map
2425
* https://tracker.ceph.com/issues/50223
2426
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2427
* https://tracker.ceph.com/issues/52624
2428
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2429
* https://tracker.ceph.com/issues/50224
2430
    qa: test_mirroring_init_failure_with_recovery failure
2431
* https://tracker.ceph.com/issues/50821
2432
    qa: untar_snap_rm failure during mds thrashing
2433
* https://tracker.ceph.com/issues/50250
2434
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2435
2436 27 Patrick Donnelly
2437
2438 28 Patrick Donnelly
h3. 2021 October 19
2439 27 Patrick Donnelly
2440
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2441
2442
* https://tracker.ceph.com/issues/52995
2443
    qa: test_standby_count_wanted failure
2444
* https://tracker.ceph.com/issues/52948
2445
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2446
* https://tracker.ceph.com/issues/52996
2447
    qa: test_perf_counters via test_openfiletable
2448
* https://tracker.ceph.com/issues/48772
2449
    qa: pjd: not ok 9, 44, 80
2450
* https://tracker.ceph.com/issues/52997
2451
    testing: hanging umount
2452
* https://tracker.ceph.com/issues/50250
2453
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2454
* https://tracker.ceph.com/issues/52624
2455
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2456
* https://tracker.ceph.com/issues/50223
2457
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2458
* https://tracker.ceph.com/issues/50821
2459
    qa: untar_snap_rm failure during mds thrashing
2460
* https://tracker.ceph.com/issues/48773
2461
    qa: scrub does not complete
2462 26 Patrick Donnelly
2463
2464
h3. 2021 October 12
2465
2466
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2467
2468
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2469
2470
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2471
2472
2473
* https://tracker.ceph.com/issues/51282
2474
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2475
* https://tracker.ceph.com/issues/52948
2476
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2477
* https://tracker.ceph.com/issues/48773
2478
    qa: scrub does not complete
2479
* https://tracker.ceph.com/issues/50224
2480
    qa: test_mirroring_init_failure_with_recovery failure
2481
* https://tracker.ceph.com/issues/52949
2482
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2483 25 Patrick Donnelly
2484 23 Patrick Donnelly
2485 24 Patrick Donnelly
h3. 2021 October 02
2486
2487
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2488
2489
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2490
2491
test_simple failures caused by PR in this set.
2492
2493
A few reruns because of QA infra noise.
2494
2495
* https://tracker.ceph.com/issues/52822
2496
    qa: failed pacific install on fs:upgrade
2497
* https://tracker.ceph.com/issues/52624
2498
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2499
* https://tracker.ceph.com/issues/50223
2500
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2501
* https://tracker.ceph.com/issues/48773
2502
    qa: scrub does not complete
2503
2504
2505 23 Patrick Donnelly
h3. 2021 September 20
2506
2507
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2508
2509
* https://tracker.ceph.com/issues/52677
2510
    qa: test_simple failure
2511
* https://tracker.ceph.com/issues/51279
2512
    kclient hangs on umount (testing branch)
2513
* https://tracker.ceph.com/issues/50223
2514
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2515
* https://tracker.ceph.com/issues/50250
2516
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2517
* https://tracker.ceph.com/issues/52624
2518
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2519
* https://tracker.ceph.com/issues/52438
2520
    qa: ffsb timeout
2521 22 Patrick Donnelly
2522
2523
h3. 2021 September 10
2524
2525
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2526
2527
* https://tracker.ceph.com/issues/50223
2528
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2529
* https://tracker.ceph.com/issues/50250
2530
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2531
* https://tracker.ceph.com/issues/52624
2532
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2533
* https://tracker.ceph.com/issues/52625
2534
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2535
* https://tracker.ceph.com/issues/52439
2536
    qa: acls does not compile on centos stream
2537
* https://tracker.ceph.com/issues/50821
2538
    qa: untar_snap_rm failure during mds thrashing
2539
* https://tracker.ceph.com/issues/48773
2540
    qa: scrub does not complete
2541
* https://tracker.ceph.com/issues/52626
2542
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2543
* https://tracker.ceph.com/issues/51279
2544
    kclient hangs on umount (testing branch)
2545 21 Patrick Donnelly
2546
2547
h3. 2021 August 27
2548
2549
Several jobs died because of device failures.
2550
2551
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2552
2553
* https://tracker.ceph.com/issues/52430
2554
    mds: fast async create client mount breaks racy test
2555
* https://tracker.ceph.com/issues/52436
2556
    fs/ceph: "corrupt mdsmap"
2557
* https://tracker.ceph.com/issues/52437
2558
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2559
* https://tracker.ceph.com/issues/51282
2560
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2561
* https://tracker.ceph.com/issues/52438
2562
    qa: ffsb timeout
2563
* https://tracker.ceph.com/issues/52439
2564
    qa: acls does not compile on centos stream
2565 20 Patrick Donnelly
2566
2567
h3. 2021 July 30
2568
2569
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2570
2571
* https://tracker.ceph.com/issues/50250
2572
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2573
* https://tracker.ceph.com/issues/51282
2574
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2575
* https://tracker.ceph.com/issues/48773
2576
    qa: scrub does not complete
2577
* https://tracker.ceph.com/issues/51975
2578
    pybind/mgr/stats: KeyError
2579 19 Patrick Donnelly
2580
2581
h3. 2021 July 28
2582
2583
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2584
2585
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2586
2587
* https://tracker.ceph.com/issues/51905
2588
    qa: "error reading sessionmap 'mds1_sessionmap'"
2589
* https://tracker.ceph.com/issues/48773
2590
    qa: scrub does not complete
2591
* https://tracker.ceph.com/issues/50250
2592
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2593
* https://tracker.ceph.com/issues/51267
2594
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2595
* https://tracker.ceph.com/issues/51279
2596
    kclient hangs on umount (testing branch)
2597 18 Patrick Donnelly
2598
2599
h3. 2021 July 16
2600
2601
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2602
2603
* https://tracker.ceph.com/issues/48773
2604
    qa: scrub does not complete
2605
* https://tracker.ceph.com/issues/48772
2606
    qa: pjd: not ok 9, 44, 80
2607
* https://tracker.ceph.com/issues/45434
2608
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2609
* https://tracker.ceph.com/issues/51279
2610
    kclient hangs on umount (testing branch)
2611
* https://tracker.ceph.com/issues/50824
2612
    qa: snaptest-git-ceph bus error
2613 17 Patrick Donnelly
2614
2615
h3. 2021 July 04
2616
2617
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2618
2619
* https://tracker.ceph.com/issues/48773
2620
    qa: scrub does not complete
2621
* https://tracker.ceph.com/issues/39150
2622
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2623
* https://tracker.ceph.com/issues/45434
2624
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2625
* https://tracker.ceph.com/issues/51282
2626
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2627
* https://tracker.ceph.com/issues/48771
2628
    qa: iogen: workload fails to cause balancing
2629
* https://tracker.ceph.com/issues/51279
2630
    kclient hangs on umount (testing branch)
2631
* https://tracker.ceph.com/issues/50250
2632
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2633 16 Patrick Donnelly
2634
2635
h3. 2021 July 01
2636
2637
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2638
2639
* https://tracker.ceph.com/issues/51197
2640
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2641
* https://tracker.ceph.com/issues/50866
2642
    osd: stat mismatch on objects
2643
* https://tracker.ceph.com/issues/48773
2644
    qa: scrub does not complete
2645 15 Patrick Donnelly
2646
2647
h3. 2021 June 26
2648
2649
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2650
2651
* https://tracker.ceph.com/issues/51183
2652
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2653
* https://tracker.ceph.com/issues/51410
2654
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2655
* https://tracker.ceph.com/issues/48773
2656
    qa: scrub does not complete
2657
* https://tracker.ceph.com/issues/51282
2658
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2659
* https://tracker.ceph.com/issues/51169
2660
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2661
* https://tracker.ceph.com/issues/48772
2662
    qa: pjd: not ok 9, 44, 80
2663 14 Patrick Donnelly
2664
2665
h3. 2021 June 21
2666
2667
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2668
2669
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2670
2671
* https://tracker.ceph.com/issues/51282
2672
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2673
* https://tracker.ceph.com/issues/51183
2674
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2675
* https://tracker.ceph.com/issues/48773
2676
    qa: scrub does not complete
2677
* https://tracker.ceph.com/issues/48771
2678
    qa: iogen: workload fails to cause balancing
2679
* https://tracker.ceph.com/issues/51169
2680
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2681
* https://tracker.ceph.com/issues/50495
2682
    libcephfs: shutdown race fails with status 141
2683
* https://tracker.ceph.com/issues/45434
2684
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2685
* https://tracker.ceph.com/issues/50824
2686
    qa: snaptest-git-ceph bus error
2687
* https://tracker.ceph.com/issues/50223
2688
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2689 13 Patrick Donnelly
2690
2691
h3. 2021 June 16
2692
2693
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2694
2695
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2696
2697
* https://tracker.ceph.com/issues/45434
2698
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2699
* https://tracker.ceph.com/issues/51169
2700
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2701
* https://tracker.ceph.com/issues/43216
2702
    MDSMonitor: removes MDS coming out of quorum election
2703
* https://tracker.ceph.com/issues/51278
2704
    mds: "FAILED ceph_assert(!segments.empty())"
2705
* https://tracker.ceph.com/issues/51279
2706
    kclient hangs on umount (testing branch)
2707
* https://tracker.ceph.com/issues/51280
2708
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2709
* https://tracker.ceph.com/issues/51183
2710
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2711
* https://tracker.ceph.com/issues/51281
2712
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2713
* https://tracker.ceph.com/issues/48773
2714
    qa: scrub does not complete
2715
* https://tracker.ceph.com/issues/51076
2716
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2717
* https://tracker.ceph.com/issues/51228
2718
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2719
* https://tracker.ceph.com/issues/51282
2720
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2721 12 Patrick Donnelly
2722
2723
h3. 2021 June 14
2724
2725
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2726
2727
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2728
2729
* https://tracker.ceph.com/issues/51169
2730
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2731
* https://tracker.ceph.com/issues/51228
2732
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2733
* https://tracker.ceph.com/issues/48773
2734
    qa: scrub does not complete
2735
* https://tracker.ceph.com/issues/51183
2736
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2737
* https://tracker.ceph.com/issues/45434
2738
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2739
* https://tracker.ceph.com/issues/51182
2740
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2741
* https://tracker.ceph.com/issues/51229
2742
    qa: test_multi_snap_schedule list difference failure
2743
* https://tracker.ceph.com/issues/50821
2744
    qa: untar_snap_rm failure during mds thrashing
2745 11 Patrick Donnelly
2746
2747
h3. 2021 June 13
2748
2749
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2750
2751
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2752
2753
* https://tracker.ceph.com/issues/51169
2754
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2755
* https://tracker.ceph.com/issues/48773
2756
    qa: scrub does not complete
2757
* https://tracker.ceph.com/issues/51182
2758
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2759
* https://tracker.ceph.com/issues/51183
2760
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2761
* https://tracker.ceph.com/issues/51197
2762
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2763
* https://tracker.ceph.com/issues/45434
2764 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2765
2766
h3. 2021 June 11
2767
2768
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2769
2770
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2771
2772
* https://tracker.ceph.com/issues/51169
2773
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2774
* https://tracker.ceph.com/issues/45434
2775
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2776
* https://tracker.ceph.com/issues/48771
2777
    qa: iogen: workload fails to cause balancing
2778
* https://tracker.ceph.com/issues/43216
2779
    MDSMonitor: removes MDS coming out of quorum election
2780
* https://tracker.ceph.com/issues/51182
2781
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2782
* https://tracker.ceph.com/issues/50223
2783
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2784
* https://tracker.ceph.com/issues/48773
2785
    qa: scrub does not complete
2786
* https://tracker.ceph.com/issues/51183
2787
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2788
* https://tracker.ceph.com/issues/51184
2789
    qa: fs:bugs does not specify distro
2790 9 Patrick Donnelly
2791
2792
h3. 2021 June 03
2793
2794
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2795
2796
* https://tracker.ceph.com/issues/45434
2797
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2798
* https://tracker.ceph.com/issues/50016
2799
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2800
* https://tracker.ceph.com/issues/50821
2801
    qa: untar_snap_rm failure during mds thrashing
2802
* https://tracker.ceph.com/issues/50622 (regression)
2803
    msg: active_connections regression
2804
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2805
    qa: failed umount in test_volumes
2806
* https://tracker.ceph.com/issues/48773
2807
    qa: scrub does not complete
2808
* https://tracker.ceph.com/issues/43216
2809
    MDSMonitor: removes MDS coming out of quorum election
2810 7 Patrick Donnelly
2811
2812 8 Patrick Donnelly
h3. 2021 May 18
2813
2814
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2815
2816
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2817
looked better. Some odd new noise in the rerun relating to packaging and "No
2818
module named 'tasks.ceph'".
2819
2820
* https://tracker.ceph.com/issues/50824
2821
    qa: snaptest-git-ceph bus error
2822
* https://tracker.ceph.com/issues/50622 (regression)
2823
    msg: active_connections regression
2824
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2825
    qa: failed umount in test_volumes
2826
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2827
    qa: quota failure
2828
2829
2830 7 Patrick Donnelly
h3. 2021 May 18
2831
2832
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2833
2834
* https://tracker.ceph.com/issues/50821
2835
    qa: untar_snap_rm failure during mds thrashing
2836
* https://tracker.ceph.com/issues/48773
2837
    qa: scrub does not complete
2838
* https://tracker.ceph.com/issues/45591
2839
    mgr: FAILED ceph_assert(daemon != nullptr)
2840
* https://tracker.ceph.com/issues/50866
2841
    osd: stat mismatch on objects
2842
* https://tracker.ceph.com/issues/50016
2843
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2844
* https://tracker.ceph.com/issues/50867
2845
    qa: fs:mirror: reduced data availability
2846
* https://tracker.ceph.com/issues/50622 (regression)
2849
    msg: active_connections regression
2850
* https://tracker.ceph.com/issues/50223
2851
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2852
* https://tracker.ceph.com/issues/50868
2853
    qa: "kern.log.gz already exists; not overwritten"
2854
* https://tracker.ceph.com/issues/50870
2855
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2856 6 Patrick Donnelly
2857
2858
h3. 2021 May 11
2859
2860
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2861
2862
* one class of failures caused by PR
2863
* https://tracker.ceph.com/issues/48812
2864
    qa: test_scrub_pause_and_resume_with_abort failure
2865
* https://tracker.ceph.com/issues/50390
2866
    mds: monclient: wait_auth_rotating timed out after 30
2867
* https://tracker.ceph.com/issues/48773
2868
    qa: scrub does not complete
2869
* https://tracker.ceph.com/issues/50821
2870
    qa: untar_snap_rm failure during mds thrashing
2871
* https://tracker.ceph.com/issues/50224
2872
    qa: test_mirroring_init_failure_with_recovery failure
2873
* https://tracker.ceph.com/issues/50622 (regression)
2874
    msg: active_connections regression
2875
* https://tracker.ceph.com/issues/50825
2876
    qa: snaptest-git-ceph hang during mon thrashing v2
2877
* https://tracker.ceph.com/issues/50823
2880
    qa: RuntimeError: timeout waiting for cluster to stabilize
2881 5 Patrick Donnelly
2882
2883
h3. 2021 May 14
2884
2885
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2886
2887
* https://tracker.ceph.com/issues/48812
2888
    qa: test_scrub_pause_and_resume_with_abort failure
2889
* https://tracker.ceph.com/issues/50821
2890
    qa: untar_snap_rm failure during mds thrashing
2891
* https://tracker.ceph.com/issues/50622 (regression)
2892
    msg: active_connections regression
2893
* https://tracker.ceph.com/issues/50822
2894
    qa: testing kernel patch for client metrics causes mds abort
2895
* https://tracker.ceph.com/issues/48773
2896
    qa: scrub does not complete
2897
* https://tracker.ceph.com/issues/50823
2898
    qa: RuntimeError: timeout waiting for cluster to stabilize
2899
* https://tracker.ceph.com/issues/50824
2900
    qa: snaptest-git-ceph bus error
2901
* https://tracker.ceph.com/issues/50825
2902
    qa: snaptest-git-ceph hang during mon thrashing v2
2903
* https://tracker.ceph.com/issues/50826
2904
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2905 4 Patrick Donnelly
2906
2907
h3. 2021 May 01
2908
2909
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2910
2911
* https://tracker.ceph.com/issues/45434
2912
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2913
* https://tracker.ceph.com/issues/50281
2914
    qa: untar_snap_rm timeout
2915
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2916
    qa: quota failure
2917
* https://tracker.ceph.com/issues/48773
2918
    qa: scrub does not complete
2919
* https://tracker.ceph.com/issues/50390
2920
    mds: monclient: wait_auth_rotating timed out after 30
2921
* https://tracker.ceph.com/issues/50250
2922
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2923
* https://tracker.ceph.com/issues/50622 (regression)
2924
    msg: active_connections regression
2925
* https://tracker.ceph.com/issues/45591
2926
    mgr: FAILED ceph_assert(daemon != nullptr)
2927
* https://tracker.ceph.com/issues/50221
2928
    qa: snaptest-git-ceph failure in git diff
2929
* https://tracker.ceph.com/issues/50016
2930
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2931 3 Patrick Donnelly
2932
2933
h3. 2021 Apr 15
2934
2935
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2936
2937
* https://tracker.ceph.com/issues/50281
2938
    qa: untar_snap_rm timeout
2939
* https://tracker.ceph.com/issues/50220
2940
    qa: dbench workload timeout
2941
* https://tracker.ceph.com/issues/50246
2942
    mds: failure replaying journal (EMetaBlob)
2943
* https://tracker.ceph.com/issues/50250
2944
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2945
* https://tracker.ceph.com/issues/50016
2946
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2947
* https://tracker.ceph.com/issues/50222
2948
    osd: 5.2s0 deep-scrub : stat mismatch
2949
* https://tracker.ceph.com/issues/45434
2950
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2951
* https://tracker.ceph.com/issues/49845
2952
    qa: failed umount in test_volumes
2953
* https://tracker.ceph.com/issues/37808
2954
    osd: osdmap cache weak_refs assert during shutdown
2955
* https://tracker.ceph.com/issues/50387
2956
    client: fs/snaps failure
2957
* https://tracker.ceph.com/issues/50389
2958
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
2959
* https://tracker.ceph.com/issues/50216
2960
    qa: "ls: cannot access 'lost+found': No such file or directory"
2961
* https://tracker.ceph.com/issues/50390
2962
    mds: monclient: wait_auth_rotating timed out after 30
2963
2964 1 Patrick Donnelly
2965
2966 2 Patrick Donnelly
h3. 2021 Apr 08
2967
2968
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2969
2970
* https://tracker.ceph.com/issues/45434
2971
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2972
* https://tracker.ceph.com/issues/50016
2973
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2974
* https://tracker.ceph.com/issues/48773
2975
    qa: scrub does not complete
2976
* https://tracker.ceph.com/issues/50279
2977
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2978
* https://tracker.ceph.com/issues/50246
2979
    mds: failure replaying journal (EMetaBlob)
2980
* https://tracker.ceph.com/issues/48365
2981
    qa: ffsb build failure on CentOS 8.2
2982
* https://tracker.ceph.com/issues/50216
2983
    qa: "ls: cannot access 'lost+found': No such file or directory"
2984
* https://tracker.ceph.com/issues/50223
2985
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2986
* https://tracker.ceph.com/issues/50280
2987
    cephadm: RuntimeError: uid/gid not found
2988
* https://tracker.ceph.com/issues/50281
2989
    qa: untar_snap_rm timeout
2990
2991 1 Patrick Donnelly
h3. 2021 Apr 08
2992
2993
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2994
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2995
2996
* https://tracker.ceph.com/issues/50246
2997
    mds: failure replaying journal (EMetaBlob)
2998
* https://tracker.ceph.com/issues/50250
2999
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3000
3001
3002
h3. 2021 Apr 07
3003
3004
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
3005
3006
* https://tracker.ceph.com/issues/50215
3007
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
3008
* https://tracker.ceph.com/issues/49466
3009
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3010
* https://tracker.ceph.com/issues/50216
3011
    qa: "ls: cannot access 'lost+found': No such file or directory"
3012
* https://tracker.ceph.com/issues/48773
3013
    qa: scrub does not complete
3014
* https://tracker.ceph.com/issues/49845
3015
    qa: failed umount in test_volumes
3016
* https://tracker.ceph.com/issues/50220
3017
    qa: dbench workload timeout
3018
* https://tracker.ceph.com/issues/50221
3019
    qa: snaptest-git-ceph failure in git diff
3020
* https://tracker.ceph.com/issues/50222
3021
    osd: 5.2s0 deep-scrub : stat mismatch
3022
* https://tracker.ceph.com/issues/50223
3023
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3024
* https://tracker.ceph.com/issues/50224
3025
    qa: test_mirroring_init_failure_with_recovery failure
3026
3027
h3. 2021 Apr 01
3028
3029
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
3030
3031
* https://tracker.ceph.com/issues/48772
3032
    qa: pjd: not ok 9, 44, 80
3033
* https://tracker.ceph.com/issues/50177
3034
    osd: "stalled aio... buggy kernel or bad device?"
3035
* https://tracker.ceph.com/issues/48771
3036
    qa: iogen: workload fails to cause balancing
3037
* https://tracker.ceph.com/issues/49845
3038
    qa: failed umount in test_volumes
3039
* https://tracker.ceph.com/issues/48773
3040
    qa: scrub does not complete
3041
* https://tracker.ceph.com/issues/48805
3042
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3043
* https://tracker.ceph.com/issues/50178
3044
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3045
* https://tracker.ceph.com/issues/45434
3046
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3047
3048
h3. 2021 Mar 24
3049
3050
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3051
3052
* https://tracker.ceph.com/issues/49500
3053
    qa: "Assertion `cb_done' failed."
3054
* https://tracker.ceph.com/issues/50019
3055
    qa: mount failure with cephadm "probably no MDS server is up?"
3056
* https://tracker.ceph.com/issues/50020
3057
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3058
* https://tracker.ceph.com/issues/48773
3059
    qa: scrub does not complete
3060
* https://tracker.ceph.com/issues/45434
3061
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3062
* https://tracker.ceph.com/issues/48805
3063
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3064
* https://tracker.ceph.com/issues/48772
3065
    qa: pjd: not ok 9, 44, 80
3066
* https://tracker.ceph.com/issues/50021
3067
    qa: snaptest-git-ceph failure during mon thrashing
3068
* https://tracker.ceph.com/issues/48771
3069
    qa: iogen: workload fails to cause balancing
3070
* https://tracker.ceph.com/issues/50016
3071
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3072
* https://tracker.ceph.com/issues/49466
3073
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3074
3075
3076
h3. 2021 Mar 18
3077
3078
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3079
3080
* https://tracker.ceph.com/issues/49466
3081
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3082
* https://tracker.ceph.com/issues/48773
3083
    qa: scrub does not complete
3084
* https://tracker.ceph.com/issues/48805
3085
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3086
* https://tracker.ceph.com/issues/45434
3087
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3088
* https://tracker.ceph.com/issues/49845
3089
    qa: failed umount in test_volumes
3090
* https://tracker.ceph.com/issues/49605
3091
    mgr: drops command on the floor
3092
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3093
    qa: quota failure
3094
* https://tracker.ceph.com/issues/49928
3095
    client: items pinned in cache preventing unmount x2
3096
3097
h3. 2021 Mar 15
3098
3099
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3100
3101
* https://tracker.ceph.com/issues/49842
3102
    qa: stuck pkg install
3103
* https://tracker.ceph.com/issues/49466
3104
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3105
* https://tracker.ceph.com/issues/49822
3106
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3107
* https://tracker.ceph.com/issues/49240
3108
    terminate called after throwing an instance of 'std::bad_alloc'
3109
* https://tracker.ceph.com/issues/48773
3110
    qa: scrub does not complete
3111
* https://tracker.ceph.com/issues/45434
3112
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3113
* https://tracker.ceph.com/issues/49500
3114
    qa: "Assertion `cb_done' failed."
3115
* https://tracker.ceph.com/issues/49843
3116
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3117
* https://tracker.ceph.com/issues/49845
3118
    qa: failed umount in test_volumes
3119
* https://tracker.ceph.com/issues/48805
3120
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3121
* https://tracker.ceph.com/issues/49605
3122
    mgr: drops command on the floor
3123
3124
and failure caused by PR: https://github.com/ceph/ceph/pull/39969
3125
3126
3127
h3. 2021 Mar 09
3128
3129
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3130
3131
* https://tracker.ceph.com/issues/49500
3132
    qa: "Assertion `cb_done' failed."
3133
* https://tracker.ceph.com/issues/48805
3134
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3135
* https://tracker.ceph.com/issues/48773
3136
    qa: scrub does not complete
3137
* https://tracker.ceph.com/issues/45434
3138
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3139
* https://tracker.ceph.com/issues/49240
3140
    terminate called after throwing an instance of 'std::bad_alloc'
3141
* https://tracker.ceph.com/issues/49466
3142
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3143
* https://tracker.ceph.com/issues/49684
3144
    qa: fs:cephadm mount does not wait for mds to be created
3145
* https://tracker.ceph.com/issues/48771
3146
    qa: iogen: workload fails to cause balancing