Project

General

Profile

Main » History » Version 245

Rishabh Dave, 04/04/2024 08:02 AM

1 221 Patrick Donnelly
h1. <code>main</code> branch
2 1 Patrick Donnelly
3 245 Rishabh Dave
h3. ADD NEW ENTRY BELOW
4
5 240 Patrick Donnelly
h3. 2024-04-02
6
7
https://tracker.ceph.com/issues/65215
8
9
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
10
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
11
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
12
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
13
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
14
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
15
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
16
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
17
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
18
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
19 241 Patrick Donnelly
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log":https://tracker.ceph.com/issues/65021
20
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
21
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
22
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
23
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
24 244 Patrick Donnelly
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
25 241 Patrick Donnelly
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534
26 240 Patrick Donnelly
27 236 Patrick Donnelly
h3. 2024-03-28
28
29
https://tracker.ceph.com/issues/65213
30
31 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
32
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
33
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
34 238 Patrick Donnelly
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
35
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
36
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
37 239 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
38
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
39
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
40
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
41
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
42
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
43
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
44
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
45
46
47 236 Patrick Donnelly
48 235 Milind Changire
h3. 2024-03-25
49
50
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
51
* https://tracker.ceph.com/issues/64502
52
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
53
54
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
55
56
* https://tracker.ceph.com/issues/62245
57
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
58
59
60 228 Patrick Donnelly
h3. 2024-03-20
61
62 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
63 228 Patrick Donnelly
64 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
65
66 229 Patrick Donnelly
Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.
67 1 Patrick Donnelly
68 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
69 228 Patrick Donnelly
70 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
71
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
72
* https://tracker.ceph.com/issues/64572
73
    workunits/fsx.sh failure
74
* https://tracker.ceph.com/issues/65018
75
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
76
* https://tracker.ceph.com/issues/64707 (new issue)
77
    suites/fsstress.sh hangs on one client - test times out
78 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
79
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
80
* https://tracker.ceph.com/issues/59684
81
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
82 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
83
    qa: "ceph tell 4.3a deep-scrub" command not found
84
* https://tracker.ceph.com/issues/54108
85
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
86
* https://tracker.ceph.com/issues/65019
87
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log 
88
* https://tracker.ceph.com/issues/65020
89
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
90
* https://tracker.ceph.com/issues/65021
91
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
92
* https://tracker.ceph.com/issues/63699
93
    qa: failed cephfs-shell test_reading_conf
94 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
95
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
96
* https://tracker.ceph.com/issues/50821
97
    qa: untar_snap_rm failure during mds thrashing
98 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
99
    qa: test_max_items_per_obj open procs not fully cleaned up
100 228 Patrick Donnelly
101 226 Venky Shankar
h3.  14th March 2024
102
103
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
104
105 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)
106 226 Venky Shankar
107
* https://tracker.ceph.com/issues/62067
108
    ffsb.sh failure "Resource temporarily unavailable"
109
* https://tracker.ceph.com/issues/57676
110
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
111
* https://tracker.ceph.com/issues/64502
112
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
113
* https://tracker.ceph.com/issues/64572
114
    workunits/fsx.sh failure
115
* https://tracker.ceph.com/issues/63700
116
    qa: test_cd_with_args failure
117
* https://tracker.ceph.com/issues/59684
118
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
119
* https://tracker.ceph.com/issues/61243
120
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
121
122 225 Venky Shankar
h3. 5th March 2024
123
124
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
125
126
* https://tracker.ceph.com/issues/57676
127
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
128
* https://tracker.ceph.com/issues/64502
129
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
130
* https://tracker.ceph.com/issues/63949
131
    leak in mds.c detected by valgrind during CephFS QA run
132
* https://tracker.ceph.com/issues/57656
133
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
134
* https://tracker.ceph.com/issues/63699
135
    qa: failed cephfs-shell test_reading_conf
136
* https://tracker.ceph.com/issues/64572
137
    workunits/fsx.sh failure
138
* https://tracker.ceph.com/issues/64707 (new issue)
139
    suites/fsstress.sh hangs on one client - test times out
140
* https://tracker.ceph.com/issues/59684
141
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
142
* https://tracker.ceph.com/issues/63700
143
    qa: test_cd_with_args failure
144
* https://tracker.ceph.com/issues/64711
145
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
146
* https://tracker.ceph.com/issues/64729 (new issue)
147
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
148
* https://tracker.ceph.com/issues/64730
149
    fs/misc/multiple_rsync.sh workunit times out
150
151 224 Venky Shankar
h3. 26th Feb 2024
152
153
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
154
155
(This run is a bit messy due to
156
157
  a) OCI runtime issues in the testing kernel with centos9
158
  b) SELinux denials related failures
159
  c) Unrelated MON_DOWN warnings)
160
161
* https://tracker.ceph.com/issues/57676
162
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
163
* https://tracker.ceph.com/issues/63700
164
    qa: test_cd_with_args failure
165
* https://tracker.ceph.com/issues/63949
166
    leak in mds.c detected by valgrind during CephFS QA run
167
* https://tracker.ceph.com/issues/59684
168
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
169
* https://tracker.ceph.com/issues/61243
170
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
171
* https://tracker.ceph.com/issues/63699
172
    qa: failed cephfs-shell test_reading_conf
173
* https://tracker.ceph.com/issues/64172
174
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
175
* https://tracker.ceph.com/issues/57656
176
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
177
* https://tracker.ceph.com/issues/64572
178
    workunits/fsx.sh failure
179
180 222 Patrick Donnelly
h3. 20th Feb 2024
181
182
https://github.com/ceph/ceph/pull/55601
183
https://github.com/ceph/ceph/pull/55659
184
185
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
186
187
* https://tracker.ceph.com/issues/64502
188
    client: quincy ceph-fuse fails to unmount after upgrade to main
189
190 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from </code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
191 218 Venky Shankar
192
h3. 19th Feb 2024
193
194 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
195
196 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
197
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
198
* https://tracker.ceph.com/issues/63700
199
    qa: test_cd_with_args failure
200
* https://tracker.ceph.com/issues/63141
201
    qa/cephfs: test_idem_unaffected_root_squash fails
202
* https://tracker.ceph.com/issues/59684
203
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
204
* https://tracker.ceph.com/issues/63949
205
    leak in mds.c detected by valgrind during CephFS QA run
206
* https://tracker.ceph.com/issues/63764
207
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
208
* https://tracker.ceph.com/issues/63699
209
    qa: failed cephfs-shell test_reading_conf
210 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
211
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
212 201 Rishabh Dave
213 217 Venky Shankar
h3. 29 Jan 2024
214
215
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
216
217
* https://tracker.ceph.com/issues/57676
218
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
219
* https://tracker.ceph.com/issues/63949
220
    leak in mds.c detected by valgrind during CephFS QA run
221
* https://tracker.ceph.com/issues/62067
222
    ffsb.sh failure "Resource temporarily unavailable"
223
* https://tracker.ceph.com/issues/64172
224
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
225
* https://tracker.ceph.com/issues/63265
226
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
227
* https://tracker.ceph.com/issues/61243
228
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
229
* https://tracker.ceph.com/issues/59684
230
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
231
* https://tracker.ceph.com/issues/57656
232
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
233
* https://tracker.ceph.com/issues/64209
234
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
235
236 216 Venky Shankar
h3. 17th Jan 2024
237
238
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
239
240
* https://tracker.ceph.com/issues/63764
241
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
242
* https://tracker.ceph.com/issues/57676
243
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
244
* https://tracker.ceph.com/issues/51964
245
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
246
* https://tracker.ceph.com/issues/63949
247
    leak in mds.c detected by valgrind during CephFS QA run
248
* https://tracker.ceph.com/issues/62067
249
    ffsb.sh failure "Resource temporarily unavailable"
250
* https://tracker.ceph.com/issues/61243
251
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
252
* https://tracker.ceph.com/issues/63259
253
    mds: failed to store backtrace and force file system read-only
254
* https://tracker.ceph.com/issues/63265
255
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
256
257
h3. 16 Jan 2024
258 215 Rishabh Dave
259 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
260
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
261
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
262
263
* https://tracker.ceph.com/issues/63764
264
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
265
* https://tracker.ceph.com/issues/63141
266
  qa/cephfs: test_idem_unaffected_root_squash fails
267
* https://tracker.ceph.com/issues/62067
268
  ffsb.sh failure "Resource temporarily unavailable" 
269
* https://tracker.ceph.com/issues/51964
270
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
271
* https://tracker.ceph.com/issues/54462 
272
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
273
* https://tracker.ceph.com/issues/57676
274
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
275
276
* https://tracker.ceph.com/issues/63949
277
  valgrind leak in MDS
278
* https://tracker.ceph.com/issues/64041
279
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
280
* fsstress failure in last run was due a kernel MM layer failure, unrelated to CephFS
281
* from last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS
282
283 213 Venky Shankar
h3. 06 Dec 2023
284
285
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
286
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
287
288
* https://tracker.ceph.com/issues/63764
289
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
290
* https://tracker.ceph.com/issues/63233
291
    mon|client|mds: valgrind reports possible leaks in the MDS
292
* https://tracker.ceph.com/issues/57676
293
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
294
* https://tracker.ceph.com/issues/62580
295
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
296
* https://tracker.ceph.com/issues/62067
297
    ffsb.sh failure "Resource temporarily unavailable"
298
* https://tracker.ceph.com/issues/61243
299
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
300
* https://tracker.ceph.com/issues/62081
301
    tasks/fscrypt-common does not finish, timesout
302
* https://tracker.ceph.com/issues/63265
303
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
304
* https://tracker.ceph.com/issues/63806
305
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
306
307 211 Patrick Donnelly
h3. 30 Nov 2023
308
309
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
310
311
* https://tracker.ceph.com/issues/63699
312 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
313
* https://tracker.ceph.com/issues/63700
314
    qa: test_cd_with_args failure
315 211 Patrick Donnelly
316 210 Venky Shankar
h3. 29 Nov 2023
317
318
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
319
320
* https://tracker.ceph.com/issues/63233
321
    mon|client|mds: valgrind reports possible leaks in the MDS
322
* https://tracker.ceph.com/issues/63141
323
    qa/cephfs: test_idem_unaffected_root_squash fails
324
* https://tracker.ceph.com/issues/57676
325
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
326
* https://tracker.ceph.com/issues/57655
327
    qa: fs:mixed-clients kernel_untar_build failure
328
* https://tracker.ceph.com/issues/62067
329
    ffsb.sh failure "Resource temporarily unavailable"
330
* https://tracker.ceph.com/issues/61243
331
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
332
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
333
* https://tracker.ceph.com/issues/62810
334
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
335
336 206 Venky Shankar
h3. 14 Nov 2023
337 207 Milind Changire
(Milind)
338
339
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
340
341
* https://tracker.ceph.com/issues/53859
342
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
343
* https://tracker.ceph.com/issues/63233
344
  mon|client|mds: valgrind reports possible leaks in the MDS
345
* https://tracker.ceph.com/issues/63521
346
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
347
* https://tracker.ceph.com/issues/57655
348
  qa: fs:mixed-clients kernel_untar_build failure
349
* https://tracker.ceph.com/issues/62580
350
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
351
* https://tracker.ceph.com/issues/57676
352
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
353
* https://tracker.ceph.com/issues/61243
354
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
355
* https://tracker.ceph.com/issues/63141
356
    qa/cephfs: test_idem_unaffected_root_squash fails
357
* https://tracker.ceph.com/issues/51964
358
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
359
* https://tracker.ceph.com/issues/63522
360
    No module named 'tasks.ceph_fuse'
361
    No module named 'tasks.kclient'
362
    No module named 'tasks.cephfs.fuse_mount'
363
    No module named 'tasks.ceph'
364
* https://tracker.ceph.com/issues/63523
365
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
366
367
368
h3. 14 Nov 2023
369 206 Venky Shankar
370
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
371
372
(nvm the fs:upgrade test failure - the PR is excluded from merge)
373
374
* https://tracker.ceph.com/issues/57676
375
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
376
* https://tracker.ceph.com/issues/63233
377
    mon|client|mds: valgrind reports possible leaks in the MDS
378
* https://tracker.ceph.com/issues/63141
379
    qa/cephfs: test_idem_unaffected_root_squash fails
380
* https://tracker.ceph.com/issues/62580
381
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
382
* https://tracker.ceph.com/issues/57655
383
    qa: fs:mixed-clients kernel_untar_build failure
384
* https://tracker.ceph.com/issues/51964
385
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
386
* https://tracker.ceph.com/issues/63519
387
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
388
* https://tracker.ceph.com/issues/57087
389
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
390
* https://tracker.ceph.com/issues/58945
391
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
392
393 204 Rishabh Dave
h3. 7 Nov 2023
394
395 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
396
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
397
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
398 204 Rishabh Dave
399
* https://tracker.ceph.com/issues/53859
400
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
401
* https://tracker.ceph.com/issues/63233
402
  mon|client|mds: valgrind reports possible leaks in the MDS
403
* https://tracker.ceph.com/issues/57655
404
  qa: fs:mixed-clients kernel_untar_build failure
405
* https://tracker.ceph.com/issues/57676
406
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
407
408
* https://tracker.ceph.com/issues/63473
409
  fsstress.sh failed with errno 124
410
411 202 Rishabh Dave
h3. 3 Nov 2023
412 203 Rishabh Dave
413 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
414
415
* https://tracker.ceph.com/issues/63141
416
  qa/cephfs: test_idem_unaffected_root_squash fails
417
* https://tracker.ceph.com/issues/63233
418
  mon|client|mds: valgrind reports possible leaks in the MDS
419
* https://tracker.ceph.com/issues/57656
420
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
421
* https://tracker.ceph.com/issues/57655
422
  qa: fs:mixed-clients kernel_untar_build failure
423
* https://tracker.ceph.com/issues/57676
424
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
425
426
* https://tracker.ceph.com/issues/59531
427
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
428
* https://tracker.ceph.com/issues/52624
429
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
430
431 198 Patrick Donnelly
h3. 24 October 2023
432
433
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
434
435 200 Patrick Donnelly
Two failures:
436
437
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
438
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
439
440
probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.
441
442 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
443
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
444
* https://tracker.ceph.com/issues/57676
445 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
446
* https://tracker.ceph.com/issues/63233
447
    mon|client|mds: valgrind reports possible leaks in the MDS
448
* https://tracker.ceph.com/issues/59531
449
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
450
* https://tracker.ceph.com/issues/57655
451
    qa: fs:mixed-clients kernel_untar_build failure
452 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
453
    ffsb.sh failure "Resource temporarily unavailable"
454
* https://tracker.ceph.com/issues/63411
455
    qa: flush journal may cause timeouts of `scrub status`
456
* https://tracker.ceph.com/issues/61243
457
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
458
* https://tracker.ceph.com/issues/63141
459 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
460 148 Rishabh Dave
461 195 Venky Shankar
h3. 18 Oct 2023
462
463
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
464
465
* https://tracker.ceph.com/issues/52624
466
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
467
* https://tracker.ceph.com/issues/57676
468
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
469
* https://tracker.ceph.com/issues/63233
470
    mon|client|mds: valgrind reports possible leaks in the MDS
471
* https://tracker.ceph.com/issues/63141
472
    qa/cephfs: test_idem_unaffected_root_squash fails
473
* https://tracker.ceph.com/issues/59531
474
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
475
* https://tracker.ceph.com/issues/62658
476
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
477
* https://tracker.ceph.com/issues/62580
478
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
479
* https://tracker.ceph.com/issues/62067
480
    ffsb.sh failure "Resource temporarily unavailable"
481
* https://tracker.ceph.com/issues/57655
482
    qa: fs:mixed-clients kernel_untar_build failure
483
* https://tracker.ceph.com/issues/62036
484
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
485
* https://tracker.ceph.com/issues/58945
486
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
487
* https://tracker.ceph.com/issues/62847
488
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
489
490 193 Venky Shankar
h3. 13 Oct 2023
491
492
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
493
494
* https://tracker.ceph.com/issues/52624
495
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
496
* https://tracker.ceph.com/issues/62936
497
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
498
* https://tracker.ceph.com/issues/47292
499
    cephfs-shell: test_df_for_valid_file failure
500
* https://tracker.ceph.com/issues/63141
501
    qa/cephfs: test_idem_unaffected_root_squash fails
502
* https://tracker.ceph.com/issues/62081
503
    tasks/fscrypt-common does not finish, timesout
504 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
505
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
506 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
507
    mon|client|mds: valgrind reports possible leaks in the MDS
508 193 Venky Shankar
509 190 Patrick Donnelly
h3. 16 Oct 2023
510
511
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
512
513 192 Patrick Donnelly
Infrastructure issues:
514
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
515
    Host lost.
516
517 196 Patrick Donnelly
One followup fix:
518
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
519
520 192 Patrick Donnelly
Failures:
521
522
* https://tracker.ceph.com/issues/56694
523
    qa: avoid blocking forever on hung umount
524
* https://tracker.ceph.com/issues/63089
525
    qa: tasks/mirror times out
526
* https://tracker.ceph.com/issues/52624
527
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
528
* https://tracker.ceph.com/issues/59531
529
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
530
* https://tracker.ceph.com/issues/57676
531
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
532
* https://tracker.ceph.com/issues/62658 
533
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
534
* https://tracker.ceph.com/issues/61243
535
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
536
* https://tracker.ceph.com/issues/57656
537
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
538
* https://tracker.ceph.com/issues/63233
539
  mon|client|mds: valgrind reports possible leaks in the MDS
540 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
541
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
542 192 Patrick Donnelly
543 189 Rishabh Dave
h3. 9 Oct 2023
544
545
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
546
547
* https://tracker.ceph.com/issues/54460
548
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
549
* https://tracker.ceph.com/issues/63141
550
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
551
* https://tracker.ceph.com/issues/62937
552
  logrotate doesn't support parallel execution on same set of logfiles
553
* https://tracker.ceph.com/issues/61400
554
  valgrind+ceph-mon issues
555
* https://tracker.ceph.com/issues/57676
556
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
557
* https://tracker.ceph.com/issues/55805
558
  error during scrub thrashing reached max tries in 900 secs
559
560 188 Venky Shankar
h3. 26 Sep 2023
561
562
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
563
564
* https://tracker.ceph.com/issues/52624
565
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
566
* https://tracker.ceph.com/issues/62873
567
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
568
* https://tracker.ceph.com/issues/61400
569
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
570
* https://tracker.ceph.com/issues/57676
571
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
572
* https://tracker.ceph.com/issues/62682
573
    mon: no mdsmap broadcast after "fs set joinable" is set to true
574
* https://tracker.ceph.com/issues/63089
575
    qa: tasks/mirror times out
576
577 185 Rishabh Dave
h3. 22 Sep 2023
578
579
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
580
581
* https://tracker.ceph.com/issues/59348
582
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
583
* https://tracker.ceph.com/issues/59344
584
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
585
* https://tracker.ceph.com/issues/59531
586
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
587
* https://tracker.ceph.com/issues/61574
588
  build failure for mdtest project
589
* https://tracker.ceph.com/issues/62702
590
  fsstress.sh: MDS slow requests for the internal 'rename' requests
591
* https://tracker.ceph.com/issues/57676
592
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
593
594
* https://tracker.ceph.com/issues/62863 
595
  deadlock in ceph-fuse causes teuthology job to hang and fail
596
* https://tracker.ceph.com/issues/62870
597
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
598
* https://tracker.ceph.com/issues/62873
599
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
600
601 186 Venky Shankar
h3. 20 Sep 2023
602
603
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
604
605
* https://tracker.ceph.com/issues/52624
606
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
607
* https://tracker.ceph.com/issues/61400
608
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
609
* https://tracker.ceph.com/issues/61399
610
    libmpich: undefined references to fi_strerror
611
* https://tracker.ceph.com/issues/62081
612
    tasks/fscrypt-common does not finish, timesout
613
* https://tracker.ceph.com/issues/62658 
614
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
615
* https://tracker.ceph.com/issues/62915
616
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
617
* https://tracker.ceph.com/issues/59531
618
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
619
* https://tracker.ceph.com/issues/62873
620
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
621
* https://tracker.ceph.com/issues/62936
622
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
623
* https://tracker.ceph.com/issues/62937
624
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
625
* https://tracker.ceph.com/issues/62510
626
    snaptest-git-ceph.sh failure with fs/thrash
627
* https://tracker.ceph.com/issues/62081
628
    tasks/fscrypt-common does not finish, timesout
629
* https://tracker.ceph.com/issues/62126
630
    test failure: suites/blogbench.sh stops running
631 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
632
    mon: no mdsmap broadcast after "fs set joinable" is set to true
633 186 Venky Shankar
634 184 Milind Changire
h3. 19 Sep 2023
635
636
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
637
638
* https://tracker.ceph.com/issues/58220#note-9
639
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
640
* https://tracker.ceph.com/issues/62702
641
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
642
* https://tracker.ceph.com/issues/57676
643
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
644
* https://tracker.ceph.com/issues/59348
645
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
646
* https://tracker.ceph.com/issues/52624
647
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
648
* https://tracker.ceph.com/issues/51964
649
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
650
* https://tracker.ceph.com/issues/61243
651
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
652
* https://tracker.ceph.com/issues/59344
653
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
654
* https://tracker.ceph.com/issues/62873
655
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
656
* https://tracker.ceph.com/issues/59413
657
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
658
* https://tracker.ceph.com/issues/53859
659
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
660
* https://tracker.ceph.com/issues/62482
661
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
662
663 178 Patrick Donnelly
664 177 Venky Shankar
h3. 13 Sep 2023
665
666
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
667
668
* https://tracker.ceph.com/issues/52624
669
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
670
* https://tracker.ceph.com/issues/57655
671
    qa: fs:mixed-clients kernel_untar_build failure
672
* https://tracker.ceph.com/issues/57676
673
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
674
* https://tracker.ceph.com/issues/61243
675
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
676
* https://tracker.ceph.com/issues/62567
677
    postgres workunit times out - MDS_SLOW_REQUEST in logs
678
* https://tracker.ceph.com/issues/61400
679
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
680
* https://tracker.ceph.com/issues/61399
681
    libmpich: undefined references to fi_strerror
682
* https://tracker.ceph.com/issues/57655
683
    qa: fs:mixed-clients kernel_untar_build failure
684
* https://tracker.ceph.com/issues/57676
685
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
686
* https://tracker.ceph.com/issues/51964
687
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
688
* https://tracker.ceph.com/issues/62081
689
    tasks/fscrypt-common does not finish, timesout
690 178 Patrick Donnelly
691 179 Patrick Donnelly
h3. 2023 Sep 12
692 178 Patrick Donnelly
693
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
694 1 Patrick Donnelly
695 181 Patrick Donnelly
A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:
696
697 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
698 181 Patrick Donnelly
699
Failures:
700
701 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
702
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
703
* https://tracker.ceph.com/issues/57656
704
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
705
* https://tracker.ceph.com/issues/55805
706
  error scrub thrashing reached max tries in 900 secs
707
* https://tracker.ceph.com/issues/62067
708
    ffsb.sh failure "Resource temporarily unavailable"
709
* https://tracker.ceph.com/issues/59344
710
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
711
* https://tracker.ceph.com/issues/61399
712 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
713
* https://tracker.ceph.com/issues/62832
714
  common: config_proxy deadlock during shutdown (and possibly other times)
715
* https://tracker.ceph.com/issues/59413
716 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
717 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
718
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
719
* https://tracker.ceph.com/issues/62567
720
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
721
* https://tracker.ceph.com/issues/54460
722
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
723
* https://tracker.ceph.com/issues/58220#note-9
724
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
725
* https://tracker.ceph.com/issues/59348
726
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
727 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
728
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
729
* https://tracker.ceph.com/issues/62848
730
    qa: fail_fs upgrade scenario hanging
731
* https://tracker.ceph.com/issues/62081
732
    tasks/fscrypt-common does not finish, timesout
733 177 Venky Shankar
734 176 Venky Shankar
h3. 11 Sep 2023
735 175 Venky Shankar
736
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
737
738
* https://tracker.ceph.com/issues/52624
739
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
740
* https://tracker.ceph.com/issues/61399
741
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
742
* https://tracker.ceph.com/issues/57655
743
    qa: fs:mixed-clients kernel_untar_build failure
744
* https://tracker.ceph.com/issues/61399
745
    ior build failure
746
* https://tracker.ceph.com/issues/59531
747
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
748
* https://tracker.ceph.com/issues/59344
749
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
750
* https://tracker.ceph.com/issues/59346
751
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
752
* https://tracker.ceph.com/issues/59348
753
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
754
* https://tracker.ceph.com/issues/57676
755
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
756
* https://tracker.ceph.com/issues/61243
757
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
758
* https://tracker.ceph.com/issues/62567
759
  postgres workunit times out - MDS_SLOW_REQUEST in logs
760
761
762 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
763
764
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
765
766
* https://tracker.ceph.com/issues/51964
767
  test_cephfs_mirror_restart_sync_on_blocklist failure
768
* https://tracker.ceph.com/issues/59348
769
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
770
* https://tracker.ceph.com/issues/53859
771
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
772
* https://tracker.ceph.com/issues/61892
773
  test_strays.TestStrays.test_snapshot_remove failed
774
* https://tracker.ceph.com/issues/54460
775
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
776
* https://tracker.ceph.com/issues/59346
777
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
778
* https://tracker.ceph.com/issues/59344
779
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
780
* https://tracker.ceph.com/issues/62484
781
  qa: ffsb.sh test failure
782
* https://tracker.ceph.com/issues/62567
783
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
784
  
785
* https://tracker.ceph.com/issues/61399
786
  ior build failure
787
* https://tracker.ceph.com/issues/57676
788
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
789
* https://tracker.ceph.com/issues/55805
790
  error scrub thrashing reached max tries in 900 secs
791
792 172 Rishabh Dave
h3. 6 Sep 2023
793 171 Rishabh Dave
794 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
795 171 Rishabh Dave
796 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
797
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
798 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
799
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
800 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
801 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
802
* https://tracker.ceph.com/issues/59348
803
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
804
* https://tracker.ceph.com/issues/54462
805
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
806
* https://tracker.ceph.com/issues/62556
807
  test_acls: xfstests_dev: python2 is missing
808
* https://tracker.ceph.com/issues/62067
809
  ffsb.sh failure "Resource temporarily unavailable"
810
* https://tracker.ceph.com/issues/57656
811
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
812 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
813
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
814 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
815 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
816
817 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
818
  ior build failure
819
* https://tracker.ceph.com/issues/57676
820
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
821
* https://tracker.ceph.com/issues/55805
822
  error scrub thrashing reached max tries in 900 secs
823 173 Rishabh Dave
824
* https://tracker.ceph.com/issues/62567
825
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
826
* https://tracker.ceph.com/issues/62702
827
  workunit test suites/fsstress.sh on smithi066 with status 124
828 170 Rishabh Dave
829
h3. 5 Sep 2023
830
831
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
832
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
833
  this run has failures but acc to Adam King these are not relevant and should be ignored
834
835
* https://tracker.ceph.com/issues/61892
836
  test_snapshot_remove (test_strays.TestStrays) failed
837
* https://tracker.ceph.com/issues/59348
838
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
839
* https://tracker.ceph.com/issues/54462
840
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
841
* https://tracker.ceph.com/issues/62067
842
  ffsb.sh failure "Resource temporarily unavailable"
843
* https://tracker.ceph.com/issues/57656 
844
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
845
* https://tracker.ceph.com/issues/59346
846
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
847
* https://tracker.ceph.com/issues/59344
848
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
849
* https://tracker.ceph.com/issues/50223
850
  client.xxxx isn't responding to mclientcaps(revoke)
851
* https://tracker.ceph.com/issues/57655
852
  qa: fs:mixed-clients kernel_untar_build failure
853
* https://tracker.ceph.com/issues/62187
854
  iozone.sh: line 5: iozone: command not found
855
 
856
* https://tracker.ceph.com/issues/61399
857
  ior build failure
858
* https://tracker.ceph.com/issues/57676
859
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
860
* https://tracker.ceph.com/issues/55805
861
  error scrub thrashing reached max tries in 900 secs
862 169 Venky Shankar
863
864
h3. 31 Aug 2023
865
866
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
867
868
* https://tracker.ceph.com/issues/52624
869
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
870
* https://tracker.ceph.com/issues/62187
871
    iozone: command not found
872
* https://tracker.ceph.com/issues/61399
873
    ior build failure
874
* https://tracker.ceph.com/issues/59531
875
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
876
* https://tracker.ceph.com/issues/61399
877
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
878
* https://tracker.ceph.com/issues/57655
879
    qa: fs:mixed-clients kernel_untar_build failure
880
* https://tracker.ceph.com/issues/59344
881
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
882
* https://tracker.ceph.com/issues/59346
883
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
884
* https://tracker.ceph.com/issues/59348
885
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
886
* https://tracker.ceph.com/issues/59413
887
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
888
* https://tracker.ceph.com/issues/62653
889
    qa: unimplemented fcntl command: 1036 with fsstress
890
* https://tracker.ceph.com/issues/61400
891
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
892
* https://tracker.ceph.com/issues/62658
893
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
894
* https://tracker.ceph.com/issues/62188
895
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
896 168 Venky Shankar
897
898
h3. 25 Aug 2023
899
900
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
901
902
* https://tracker.ceph.com/issues/59344
903
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
904
* https://tracker.ceph.com/issues/59346
905
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
906
* https://tracker.ceph.com/issues/59348
907
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
908
* https://tracker.ceph.com/issues/57655
909
    qa: fs:mixed-clients kernel_untar_build failure
910
* https://tracker.ceph.com/issues/61243
911
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
912
* https://tracker.ceph.com/issues/61399
913
    ior build failure
914
* https://tracker.ceph.com/issues/61399
915
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
916
* https://tracker.ceph.com/issues/62484
917
    qa: ffsb.sh test failure
918
* https://tracker.ceph.com/issues/59531
919
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
920
* https://tracker.ceph.com/issues/62510
921
    snaptest-git-ceph.sh failure with fs/thrash
922 167 Venky Shankar
923
924
h3. 24 Aug 2023
925
926
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
927
928
* https://tracker.ceph.com/issues/57676
929
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
930
* https://tracker.ceph.com/issues/51964
931
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
932
* https://tracker.ceph.com/issues/59344
933
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
934
* https://tracker.ceph.com/issues/59346
935
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
936
* https://tracker.ceph.com/issues/59348
937
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
938
* https://tracker.ceph.com/issues/61399
939
    ior build failure
940
* https://tracker.ceph.com/issues/61399
941
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
942
* https://tracker.ceph.com/issues/62510
943
    snaptest-git-ceph.sh failure with fs/thrash
944
* https://tracker.ceph.com/issues/62484
945
    qa: ffsb.sh test failure
946
* https://tracker.ceph.com/issues/57087
947
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
948
* https://tracker.ceph.com/issues/57656
949
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
950
* https://tracker.ceph.com/issues/62187
951
    iozone: command not found
952
* https://tracker.ceph.com/issues/62188
953
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
954
* https://tracker.ceph.com/issues/62567
955
    postgres workunit times out - MDS_SLOW_REQUEST in logs
956 166 Venky Shankar
957
958
h3. 22 Aug 2023
959
960
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
961
962
* https://tracker.ceph.com/issues/57676
963
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
964
* https://tracker.ceph.com/issues/51964
965
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
966
* https://tracker.ceph.com/issues/59344
967
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
968
* https://tracker.ceph.com/issues/59346
969
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
970
* https://tracker.ceph.com/issues/59348
971
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
972
* https://tracker.ceph.com/issues/61399
973
    ior build failure
974
* https://tracker.ceph.com/issues/61399
975
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
976
* https://tracker.ceph.com/issues/57655
977
    qa: fs:mixed-clients kernel_untar_build failure
978
* https://tracker.ceph.com/issues/61243
979
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
980
* https://tracker.ceph.com/issues/62188
981
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
982
* https://tracker.ceph.com/issues/62510
983
    snaptest-git-ceph.sh failure with fs/thrash
984
* https://tracker.ceph.com/issues/62511
985
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
986 165 Venky Shankar
987
988
h3. 14 Aug 2023
989
990
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
991
992
* https://tracker.ceph.com/issues/51964
993
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
994
* https://tracker.ceph.com/issues/61400
995
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
996
* https://tracker.ceph.com/issues/61399
997
    ior build failure
998
* https://tracker.ceph.com/issues/59348
999
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1000
* https://tracker.ceph.com/issues/59531
1001
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1002
* https://tracker.ceph.com/issues/59344
1003
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1004
* https://tracker.ceph.com/issues/59346
1005
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1006
* https://tracker.ceph.com/issues/61399
1007
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1008
* https://tracker.ceph.com/issues/59684 [kclient bug]
1009
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1010
* https://tracker.ceph.com/issues/61243 (NEW)
1011
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1012
* https://tracker.ceph.com/issues/57655
1013
    qa: fs:mixed-clients kernel_untar_build failure
1014
* https://tracker.ceph.com/issues/57656
1015
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1016 163 Venky Shankar
1017
1018
h3. 28 JULY 2023
1019
1020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1021
1022
* https://tracker.ceph.com/issues/51964
1023
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1024
* https://tracker.ceph.com/issues/61400
1025
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1026
* https://tracker.ceph.com/issues/61399
1027
    ior build failure
1028
* https://tracker.ceph.com/issues/57676
1029
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1030
* https://tracker.ceph.com/issues/59348
1031
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1032
* https://tracker.ceph.com/issues/59531
1033
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1034
* https://tracker.ceph.com/issues/59344
1035
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1036
* https://tracker.ceph.com/issues/59346
1037
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1038
* https://github.com/ceph/ceph/pull/52556
1039
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1040
* https://tracker.ceph.com/issues/62187
1041
    iozone: command not found
1042
* https://tracker.ceph.com/issues/61399
1043
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1044
* https://tracker.ceph.com/issues/62188
1045 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1046 158 Rishabh Dave
1047
h3. 24 Jul 2023
1048
1049
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1050
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1051
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1052
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1053
One more run to check whether blogbench.sh fails every time:
1054
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1055
The blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1056 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1057
1058
* https://tracker.ceph.com/issues/61892
1059
  test_snapshot_remove (test_strays.TestStrays) failed
1060
* https://tracker.ceph.com/issues/53859
1061
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1062
* https://tracker.ceph.com/issues/61982
1063
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1064
* https://tracker.ceph.com/issues/52438
1065
  qa: ffsb timeout
1066
* https://tracker.ceph.com/issues/54460
1067
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1068
* https://tracker.ceph.com/issues/57655
1069
  qa: fs:mixed-clients kernel_untar_build failure
1070
* https://tracker.ceph.com/issues/48773
1071
  reached max tries: scrub does not complete
1072
* https://tracker.ceph.com/issues/58340
1073
  mds: fsstress.sh hangs with multimds
1074
* https://tracker.ceph.com/issues/61400
1075
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1076
* https://tracker.ceph.com/issues/57206
1077
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1078
  
1079
* https://tracker.ceph.com/issues/57656
1080
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1081
* https://tracker.ceph.com/issues/61399
1082
  ior build failure
1083
* https://tracker.ceph.com/issues/57676
1084
  error during scrub thrashing: backtrace
1085
  
1086
* https://tracker.ceph.com/issues/38452
1087
  'sudo -u postgres -- pgbench -s 500 -i' failed
1088 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1089 157 Venky Shankar
  blogbench.sh failure
1090
1091
h3. 18 July 2023
1092
1093
* https://tracker.ceph.com/issues/52624
1094
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1095
* https://tracker.ceph.com/issues/57676
1096
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1097
* https://tracker.ceph.com/issues/54460
1098
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1099
* https://tracker.ceph.com/issues/57655
1100
    qa: fs:mixed-clients kernel_untar_build failure
1101
* https://tracker.ceph.com/issues/51964
1102
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1103
* https://tracker.ceph.com/issues/59344
1104
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1105
* https://tracker.ceph.com/issues/61182
1106
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1107
* https://tracker.ceph.com/issues/61957
1108
    test_client_limits.TestClientLimits.test_client_release_bug
1109
* https://tracker.ceph.com/issues/59348
1110
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1111
* https://tracker.ceph.com/issues/61892
1112
    test_strays.TestStrays.test_snapshot_remove failed
1113
* https://tracker.ceph.com/issues/59346
1114
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1115
* https://tracker.ceph.com/issues/44565
1116
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1117
* https://tracker.ceph.com/issues/62067
1118
    ffsb.sh failure "Resource temporarily unavailable"
1119 156 Venky Shankar
1120
1121
h3. 17 July 2023
1122
1123
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1124
1125
* https://tracker.ceph.com/issues/61982
1126
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1127
* https://tracker.ceph.com/issues/59344
1128
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1129
* https://tracker.ceph.com/issues/61182
1130
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1131
* https://tracker.ceph.com/issues/61957
1132
    test_client_limits.TestClientLimits.test_client_release_bug
1133
* https://tracker.ceph.com/issues/61400
1134
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1135
* https://tracker.ceph.com/issues/59348
1136
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1137
* https://tracker.ceph.com/issues/61892
1138
    test_strays.TestStrays.test_snapshot_remove failed
1139
* https://tracker.ceph.com/issues/59346
1140
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1141
* https://tracker.ceph.com/issues/62036
1142
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1143
* https://tracker.ceph.com/issues/61737
1144
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1145
* https://tracker.ceph.com/issues/44565
1146
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1147 155 Rishabh Dave
1148 1 Patrick Donnelly
1149 153 Rishabh Dave
h3. 13 July 2023 Run 2
1150 152 Rishabh Dave
1151
1152
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1153
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1154
1155
* https://tracker.ceph.com/issues/61957
1156
  test_client_limits.TestClientLimits.test_client_release_bug
1157
* https://tracker.ceph.com/issues/61982
1158
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1159
* https://tracker.ceph.com/issues/59348
1160
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1161
* https://tracker.ceph.com/issues/59344
1162
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1163
* https://tracker.ceph.com/issues/54460
1164
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1165
* https://tracker.ceph.com/issues/57655
1166
  qa: fs:mixed-clients kernel_untar_build failure
1167
* https://tracker.ceph.com/issues/61400
1168
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1169
* https://tracker.ceph.com/issues/61399
1170
  ior build failure
1171
1172 151 Venky Shankar
h3. 13 July 2023
1173
1174
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1175
1176
* https://tracker.ceph.com/issues/54460
1177
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1178
* https://tracker.ceph.com/issues/61400
1179
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1180
* https://tracker.ceph.com/issues/57655
1181
    qa: fs:mixed-clients kernel_untar_build failure
1182
* https://tracker.ceph.com/issues/61945
1183
    LibCephFS.DelegTimeout failure
1184
* https://tracker.ceph.com/issues/52624
1185
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1186
* https://tracker.ceph.com/issues/57676
1187
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1188
* https://tracker.ceph.com/issues/59348
1189
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1190
* https://tracker.ceph.com/issues/59344
1191
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1192
* https://tracker.ceph.com/issues/51964
1193
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1194
* https://tracker.ceph.com/issues/59346
1195
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1196
* https://tracker.ceph.com/issues/61982
1197
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1198 150 Rishabh Dave
1199
1200
h3. 13 Jul 2023
1201
1202
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1203
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1204
1205
* https://tracker.ceph.com/issues/61957
1206
  test_client_limits.TestClientLimits.test_client_release_bug
1207
* https://tracker.ceph.com/issues/59348
1208
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1209
* https://tracker.ceph.com/issues/59346
1210
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1211
* https://tracker.ceph.com/issues/48773
1212
  scrub does not complete: reached max tries
1213
* https://tracker.ceph.com/issues/59344
1214
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1215
* https://tracker.ceph.com/issues/52438
1216
  qa: ffsb timeout
1217
* https://tracker.ceph.com/issues/57656
1218
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1219
* https://tracker.ceph.com/issues/58742
1220
  xfstests-dev: kcephfs: generic
1221
* https://tracker.ceph.com/issues/61399
1222 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1223 149 Rishabh Dave
1224 148 Rishabh Dave
h3. 12 July 2023
1225
1226
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1227
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1228
1229
* https://tracker.ceph.com/issues/61892
1230
  test_strays.TestStrays.test_snapshot_remove failed
1231
* https://tracker.ceph.com/issues/59348
1232
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1233
* https://tracker.ceph.com/issues/53859
1234
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1235
* https://tracker.ceph.com/issues/59346
1236
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1237
* https://tracker.ceph.com/issues/58742
1238
  xfstests-dev: kcephfs: generic
1239
* https://tracker.ceph.com/issues/59344
1240
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1241
* https://tracker.ceph.com/issues/52438
1242
  qa: ffsb timeout
1243
* https://tracker.ceph.com/issues/57656
1244
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1245
* https://tracker.ceph.com/issues/54460
1246
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1247
* https://tracker.ceph.com/issues/57655
1248
  qa: fs:mixed-clients kernel_untar_build failure
1249
* https://tracker.ceph.com/issues/61182
1250
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1251
* https://tracker.ceph.com/issues/61400
1252
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1253 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1254 146 Patrick Donnelly
  reached max tries: scrub does not complete
1255
1256
h3. 05 July 2023
1257
1258
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1259
1260 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1261 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
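Both of the quota-related failures that keep showing up in these runs (the quota.sh "setfattr: .: Invalid argument" workunit failure and the `DiskQuotaExceeded not raised by write` assertion above) exercise CephFS directory quotas, which are configured through virtual xattrs. A minimal sketch of the mechanism the tests rely on (not the actual test code; the mount point and limits below are made-up examples):

<pre>
# Assumes a CephFS mount at /mnt/cephfs (illustrative path).
mkdir -p /mnt/cephfs/quota_dir

# Cap the subtree at ~100 MB and 1000 files.
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/quota_dir
setfattr -n ceph.quota.max_files -v 1000 /mnt/cephfs/quota_dir

# Read the limits back; once exceeded, writes should fail with EDQUOT
# (surfaced as DiskQuotaExceeded by the python bindings).
getfattr -n ceph.quota.max_bytes /mnt/cephfs/quota_dir

# Setting a limit to 0 removes it.
setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/quota_dir
</pre>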
1262
1263
h3. 27 Jun 2023
1264
1265
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1266 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1267
1268
* https://tracker.ceph.com/issues/59348
1269
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1270
* https://tracker.ceph.com/issues/54460
1271
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1272
* https://tracker.ceph.com/issues/59346
1273
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1274
* https://tracker.ceph.com/issues/59344
1275
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1276
* https://tracker.ceph.com/issues/61399
1277
  libmpich: undefined references to fi_strerror
1278
* https://tracker.ceph.com/issues/50223
1279
  client.xxxx isn't responding to mclientcaps(revoke)
1280 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1281
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1282 142 Venky Shankar
1283
1284
h3. 22 June 2023
1285
1286
* https://tracker.ceph.com/issues/57676
1287
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1288
* https://tracker.ceph.com/issues/54460
1289
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1290
* https://tracker.ceph.com/issues/59344
1291
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1292
* https://tracker.ceph.com/issues/59348
1293
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1294
* https://tracker.ceph.com/issues/61400
1295
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1296
* https://tracker.ceph.com/issues/57655
1297
    qa: fs:mixed-clients kernel_untar_build failure
1298
* https://tracker.ceph.com/issues/61394
1299
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1300
* https://tracker.ceph.com/issues/61762
1301
    qa: wait_for_clean: failed before timeout expired
1302
* https://tracker.ceph.com/issues/61775
1303
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1304
* https://tracker.ceph.com/issues/44565
1305
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1306
* https://tracker.ceph.com/issues/61790
1307
    cephfs client to mds comms remain silent after reconnect
1308
* https://tracker.ceph.com/issues/61791
1309
    snaptest-git-ceph.sh test timed out (job dead)
1310 139 Venky Shankar
1311
1312
h3. 20 June 2023
1313
1314
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1315
1316
* https://tracker.ceph.com/issues/57676
1317
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1318
* https://tracker.ceph.com/issues/54460
1319
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1320 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1321 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1322 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1323 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1324
* https://tracker.ceph.com/issues/59344
1325
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1326
* https://tracker.ceph.com/issues/59348
1327
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1328
* https://tracker.ceph.com/issues/57656
1329
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1330
* https://tracker.ceph.com/issues/61400
1331
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1332
* https://tracker.ceph.com/issues/57655
1333
    qa: fs:mixed-clients kernel_untar_build failure
1334
* https://tracker.ceph.com/issues/44565
1335
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1336
* https://tracker.ceph.com/issues/61737
1337 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1338
1339
h3. 16 June 2023
1340
1341 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1342 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1343 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1344 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1345
1346
1347
* https://tracker.ceph.com/issues/59344
1348
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1349 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1350
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1351 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1352
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1353
* https://tracker.ceph.com/issues/57656
1354
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1355
* https://tracker.ceph.com/issues/54460
1356
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1357 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1358
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1359 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1360
  libmpich: undefined references to fi_strerror
1361
* https://tracker.ceph.com/issues/58945
1362
  xfstests-dev: ceph-fuse: generic 
1363 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1364 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1365
1366
h3. 24 May 2023
1367
1368
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1369
1370
* https://tracker.ceph.com/issues/57676
1371
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1372
* https://tracker.ceph.com/issues/59683
1373
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1374
* https://tracker.ceph.com/issues/61399
1375
    qa: "[Makefile:299: ior] Error 1"
1376
* https://tracker.ceph.com/issues/61265
1377
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1378
* https://tracker.ceph.com/issues/59348
1379
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1380
* https://tracker.ceph.com/issues/59346
1381
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1382
* https://tracker.ceph.com/issues/61400
1383
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1384
* https://tracker.ceph.com/issues/54460
1385
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1386
* https://tracker.ceph.com/issues/51964
1387
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1388
* https://tracker.ceph.com/issues/59344
1389
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1390
* https://tracker.ceph.com/issues/61407
1391
    mds: abort on CInode::verify_dirfrags
1392
* https://tracker.ceph.com/issues/48773
1393
    qa: scrub does not complete
1394
* https://tracker.ceph.com/issues/57655
1395
    qa: fs:mixed-clients kernel_untar_build failure
1396
* https://tracker.ceph.com/issues/61409
1397 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1398
1399
h3. 15 May 2023
1400 130 Venky Shankar
1401 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1402
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1403
1404
* https://tracker.ceph.com/issues/52624
1405
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1406
* https://tracker.ceph.com/issues/54460
1407
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1408
* https://tracker.ceph.com/issues/57676
1409
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1410
* https://tracker.ceph.com/issues/59684 [kclient bug]
1411
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1412
* https://tracker.ceph.com/issues/59348
1413
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1414 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1415
    dbench test results in call trace in dmesg [kclient bug]
1416 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1417 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1418 125 Venky Shankar
1419
 
1420 129 Rishabh Dave
h3. 11 May 2023
1421
1422
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1423
1424
* https://tracker.ceph.com/issues/59684 [kclient bug]
1425
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1426
* https://tracker.ceph.com/issues/59348
1427
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1428
* https://tracker.ceph.com/issues/57655
1429
  qa: fs:mixed-clients kernel_untar_build failure
1430
* https://tracker.ceph.com/issues/57676
1431
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1432
* https://tracker.ceph.com/issues/55805
1433
  error during scrub thrashing reached max tries in 900 secs
1434
* https://tracker.ceph.com/issues/54460
1435
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1436
* https://tracker.ceph.com/issues/57656
1437
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1438
* https://tracker.ceph.com/issues/58220
1439
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1440 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1441
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1442 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1443
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1444 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1445
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1446 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1447
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1448
1449 125 Venky Shankar
h3. 11 May 2023
1450 127 Venky Shankar
1451
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1452 126 Venky Shankar
1453 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1454
was included in the branch; however, the PR got updated and needs a retest).
1455
1456
* https://tracker.ceph.com/issues/52624
1457
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1458
* https://tracker.ceph.com/issues/54460
1459
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1460
* https://tracker.ceph.com/issues/57676
1461
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1462
* https://tracker.ceph.com/issues/59683
1463
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1464
* https://tracker.ceph.com/issues/59684 [kclient bug]
1465
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1466
* https://tracker.ceph.com/issues/59348
1467 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1468
1469
h3. 09 May 2023
1470
1471
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1472
1473
* https://tracker.ceph.com/issues/52624
1474
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1475
* https://tracker.ceph.com/issues/58340
1476
    mds: fsstress.sh hangs with multimds
1477
* https://tracker.ceph.com/issues/54460
1478
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1479
* https://tracker.ceph.com/issues/57676
1480
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1481
* https://tracker.ceph.com/issues/51964
1482
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1483
* https://tracker.ceph.com/issues/59350
1484
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1485
* https://tracker.ceph.com/issues/59683
1486
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1487
* https://tracker.ceph.com/issues/59684 [kclient bug]
1488
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1489
* https://tracker.ceph.com/issues/59348
1490 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1491
1492
h3. 10 Apr 2023
1493
1494
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1495
1496
* https://tracker.ceph.com/issues/52624
1497
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1498
* https://tracker.ceph.com/issues/58340
1499
    mds: fsstress.sh hangs with multimds
1500
* https://tracker.ceph.com/issues/54460
1501
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1502
* https://tracker.ceph.com/issues/57676
1503
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1504 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1505 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1506 121 Rishabh Dave
1507 120 Rishabh Dave
h3. 31 Mar 2023
1508 122 Rishabh Dave
1509
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1510 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1511
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1512
1513
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1514
1515
* https://tracker.ceph.com/issues/57676
1516
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1517
* https://tracker.ceph.com/issues/54460
1518
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1519
* https://tracker.ceph.com/issues/58220
1520
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1521
* https://tracker.ceph.com/issues/58220#note-9
1522
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1523
* https://tracker.ceph.com/issues/56695
1524
  Command failed (workunit test suites/pjd.sh)
1525
* https://tracker.ceph.com/issues/58564 
1526
  workunit dbench failed with error code 1
1527
* https://tracker.ceph.com/issues/57206
1528
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1529
* https://tracker.ceph.com/issues/57580
1530
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1531
* https://tracker.ceph.com/issues/58940
1532
  ceph osd hit ceph_abort
1533
* https://tracker.ceph.com/issues/55805
1534 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1535
1536
h3. 30 March 2023
1537
1538
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1539
1540
* https://tracker.ceph.com/issues/58938
1541
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1542
* https://tracker.ceph.com/issues/51964
1543
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1544
* https://tracker.ceph.com/issues/58340
1545 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1546
1547 115 Venky Shankar
h3. 29 March 2023
1548 114 Venky Shankar
1549
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1550
1551
* https://tracker.ceph.com/issues/56695
1552
    [RHEL stock] pjd test failures
1553
* https://tracker.ceph.com/issues/57676
1554
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1555
* https://tracker.ceph.com/issues/57087
1556
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1557 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1558
    mds: fsstress.sh hangs with multimds
1559 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1560
    qa: fs:mixed-clients kernel_untar_build failure
1561 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1562
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1563 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1564 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1565
1566
h3. 13 Mar 2023
1567
1568
* https://tracker.ceph.com/issues/56695
1569
    [RHEL stock] pjd test failures
1570
* https://tracker.ceph.com/issues/57676
1571
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1572
* https://tracker.ceph.com/issues/51964
1573
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1574
* https://tracker.ceph.com/issues/54460
1575
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1576
* https://tracker.ceph.com/issues/57656
1577 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1578
1579
h3. 09 Mar 2023
1580
1581
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1582
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1583
1584
* https://tracker.ceph.com/issues/56695
1585
    [RHEL stock] pjd test failures
1586
* https://tracker.ceph.com/issues/57676
1587
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1588
* https://tracker.ceph.com/issues/51964
1589
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1590
* https://tracker.ceph.com/issues/54460
1591
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1592
* https://tracker.ceph.com/issues/58340
1593
    mds: fsstress.sh hangs with multimds
1594
* https://tracker.ceph.com/issues/57087
1595 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1596
1597
h3. 07 Mar 2023
1598
1599
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1600
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1601
1602
* https://tracker.ceph.com/issues/56695
1603
    [RHEL stock] pjd test failures
1604
* https://tracker.ceph.com/issues/57676
1605
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1606
* https://tracker.ceph.com/issues/51964
1607
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1608
* https://tracker.ceph.com/issues/57656
1609
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1610
* https://tracker.ceph.com/issues/57655
1611
    qa: fs:mixed-clients kernel_untar_build failure
1612
* https://tracker.ceph.com/issues/58220
1613
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1614
* https://tracker.ceph.com/issues/54460
1615
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1616
* https://tracker.ceph.com/issues/58934
1617 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1618
1619
h3. 28 Feb 2023
1620
1621
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1622
1623
* https://tracker.ceph.com/issues/56695
1624
    [RHEL stock] pjd test failures
1625
* https://tracker.ceph.com/issues/57676
1626
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1627 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1628 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1629
1630 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1631
1632
h3. 25 Jan 2023
1633
1634
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1635
1636
* https://tracker.ceph.com/issues/52624
1637
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1638
* https://tracker.ceph.com/issues/56695
1639
    [RHEL stock] pjd test failures
1640
* https://tracker.ceph.com/issues/57676
1641
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1642
* https://tracker.ceph.com/issues/56446
1643
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1644
* https://tracker.ceph.com/issues/57206
1645
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1646
* https://tracker.ceph.com/issues/58220
1647
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1648
* https://tracker.ceph.com/issues/58340
1649
  mds: fsstress.sh hangs with multimds
1650
* https://tracker.ceph.com/issues/56011
1651
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1652
* https://tracker.ceph.com/issues/54460
1653 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1654
1655
h3. 30 JAN 2023
1656
1657
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1658
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1659 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1660
1661 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1662
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1663
* https://tracker.ceph.com/issues/56695
1664
  [RHEL stock] pjd test failures
1665
* https://tracker.ceph.com/issues/57676
1666
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1667
* https://tracker.ceph.com/issues/55332
1668
  Failure in snaptest-git-ceph.sh
1669
* https://tracker.ceph.com/issues/51964
1670
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1671
* https://tracker.ceph.com/issues/56446
1672
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1673
* https://tracker.ceph.com/issues/57655 
1674
  qa: fs:mixed-clients kernel_untar_build failure
1675
* https://tracker.ceph.com/issues/54460
1676
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1677 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1678
  mds: fsstress.sh hangs with multimds
1679 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1680 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1681
1682
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1683 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1684
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1685 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1686 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1687
1688
h3. 15 Dec 2022
1689
1690
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1691
1692
* https://tracker.ceph.com/issues/52624
1693
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1694
* https://tracker.ceph.com/issues/56695
1695
    [RHEL stock] pjd test failures
1696
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1697
* https://tracker.ceph.com/issues/57655
1698
    qa: fs:mixed-clients kernel_untar_build failure
1699
1700
* https://tracker.ceph.com/issues/57676
1701
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1702
* https://tracker.ceph.com/issues/58340
1703 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1704
1705
h3. 08 Dec 2022
1706 99 Venky Shankar
1707 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1708
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1709
1710
(lots of transient git.ceph.com failures)
1711
1712
* https://tracker.ceph.com/issues/52624
1713
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1714
* https://tracker.ceph.com/issues/56695
1715
    [RHEL stock] pjd test failures
1716
* https://tracker.ceph.com/issues/57655
1717
    qa: fs:mixed-clients kernel_untar_build failure
1718
* https://tracker.ceph.com/issues/58219
1719
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1720
* https://tracker.ceph.com/issues/58220
1721
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1722 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1723
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1724 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1725
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1726
* https://tracker.ceph.com/issues/54460
1727
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1728 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1729 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1730
1731
h3. 14 Oct 2022
1732
1733
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1734
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1735
1736
* https://tracker.ceph.com/issues/52624
1737
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1738
* https://tracker.ceph.com/issues/55804
1739
    Command failed (workunit test suites/pjd.sh)
1740
* https://tracker.ceph.com/issues/51964
1741
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1742
* https://tracker.ceph.com/issues/57682
1743
    client: ERROR: test_reconnect_after_blocklisted
1744 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1745 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1746
1747
h3. 10 Oct 2022
1748 92 Rishabh Dave
1749 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1750
1751
Re-runs:
1752
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1753 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1754 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1755 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1756 91 Rishabh Dave
1757
Known bugs:
1758
* https://tracker.ceph.com/issues/52624
1759
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1760
* https://tracker.ceph.com/issues/50223
1761
  client.xxxx isn't responding to mclientcaps(revoke)
1762
* https://tracker.ceph.com/issues/57299
1763
  qa: test_dump_loads fails with JSONDecodeError
1764
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1765
  qa: fs:mixed-clients kernel_untar_build failure
1766
* https://tracker.ceph.com/issues/57206
1767 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1768
1769
h3. 2022 Sep 29
1770
1771
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1772
1773
* https://tracker.ceph.com/issues/55804
1774
  Command failed (workunit test suites/pjd.sh)
1775
* https://tracker.ceph.com/issues/36593
1776
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1777
* https://tracker.ceph.com/issues/52624
1778
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1779
* https://tracker.ceph.com/issues/51964
1780
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1781
* https://tracker.ceph.com/issues/56632
1782
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1783
* https://tracker.ceph.com/issues/50821
1784 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1785
1786
h3. 2022 Sep 26
1787
1788
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1789
1790
* https://tracker.ceph.com/issues/55804
1791
    qa failure: pjd link tests failed
1792
* https://tracker.ceph.com/issues/57676
1793
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1794
* https://tracker.ceph.com/issues/52624
1795
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1796
* https://tracker.ceph.com/issues/57580
1797
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1798
* https://tracker.ceph.com/issues/48773
1799
    qa: scrub does not complete
1800
* https://tracker.ceph.com/issues/57299
1801
    qa: test_dump_loads fails with JSONDecodeError
1802
* https://tracker.ceph.com/issues/57280
1803
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1804
* https://tracker.ceph.com/issues/57205
1805
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1806
* https://tracker.ceph.com/issues/57656
1807
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1808
* https://tracker.ceph.com/issues/57677
1809
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1810
* https://tracker.ceph.com/issues/57206
1811
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1812
* https://tracker.ceph.com/issues/57446
1813
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1814 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1815
    qa: fs:mixed-clients kernel_untar_build failure
1816 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1817
    client: ERROR: test_reconnect_after_blocklisted
1818 87 Patrick Donnelly
1819
1820
h3. 2022 Sep 22
1821
1822
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1823
1824
* https://tracker.ceph.com/issues/57299
1825
    qa: test_dump_loads fails with JSONDecodeError
1826
* https://tracker.ceph.com/issues/57205
1827
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1828
* https://tracker.ceph.com/issues/52624
1829
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1830
* https://tracker.ceph.com/issues/57580
1831
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1832
* https://tracker.ceph.com/issues/57280
1833
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1834
* https://tracker.ceph.com/issues/48773
1835
    qa: scrub does not complete
1836
* https://tracker.ceph.com/issues/56446
1837
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1838
* https://tracker.ceph.com/issues/57206
1839
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1840
* https://tracker.ceph.com/issues/51267
1841
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1842
1843
NEW:
1844
1845
* https://tracker.ceph.com/issues/57656
1846
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1847
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1848
    qa: fs:mixed-clients kernel_untar_build failure
1849
* https://tracker.ceph.com/issues/57657
1850
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1851
1852
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1853 80 Venky Shankar
1854 79 Venky Shankar
1855
h3. 2022 Sep 16
1856
1857
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1858
1859
* https://tracker.ceph.com/issues/57446
1860
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1861
* https://tracker.ceph.com/issues/57299
1862
    qa: test_dump_loads fails with JSONDecodeError
1863
* https://tracker.ceph.com/issues/50223
1864
    client.xxxx isn't responding to mclientcaps(revoke)
1865
* https://tracker.ceph.com/issues/52624
1866
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1867
* https://tracker.ceph.com/issues/57205
1868
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1869
* https://tracker.ceph.com/issues/57280
1870
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1871
* https://tracker.ceph.com/issues/51282
1872
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1873
* https://tracker.ceph.com/issues/48203
1874
  https://tracker.ceph.com/issues/36593
1875
    qa: quota failure
1876
    qa: quota failure caused by clients stepping on each other
1877
* https://tracker.ceph.com/issues/57580
1878 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1879
1880 76 Rishabh Dave
1881
h3. 2022 Aug 26
1882
1883
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1884
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1885
1886
* https://tracker.ceph.com/issues/57206
1887
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1888
* https://tracker.ceph.com/issues/56632
1889
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1890
* https://tracker.ceph.com/issues/56446
1891
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1892
* https://tracker.ceph.com/issues/51964
1893
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1894
* https://tracker.ceph.com/issues/53859
1895
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1896
1897
* https://tracker.ceph.com/issues/54460
1898
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1899
* https://tracker.ceph.com/issues/54462
1900
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1901
* https://tracker.ceph.com/issues/54460
1902
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1903
* https://tracker.ceph.com/issues/36593
1904
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1905
1906
* https://tracker.ceph.com/issues/52624
1907
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1908
* https://tracker.ceph.com/issues/55804
1909
  Command failed (workunit test suites/pjd.sh)
1910
* https://tracker.ceph.com/issues/50223
1911
  client.xxxx isn't responding to mclientcaps(revoke)
1912 75 Venky Shankar
1913
1914
h3. 2022 Aug 22
1915
1916
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1917
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1918
1919
* https://tracker.ceph.com/issues/52624
1920
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1921
* https://tracker.ceph.com/issues/56446
1922
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1923
* https://tracker.ceph.com/issues/55804
1924
    Command failed (workunit test suites/pjd.sh)
1925
* https://tracker.ceph.com/issues/51278
1926
    mds: "FAILED ceph_assert(!segments.empty())"
1927
* https://tracker.ceph.com/issues/54460
1928
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1929
* https://tracker.ceph.com/issues/57205
1930
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1931
* https://tracker.ceph.com/issues/57206
1932
    ceph_test_libcephfs_reclaim crashes during test
1933
* https://tracker.ceph.com/issues/53859
1934
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1935
* https://tracker.ceph.com/issues/50223
1936 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1937
1938
h3. 2022 Aug 12
1939
1940
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1941
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1942
1943
* https://tracker.ceph.com/issues/52624
1944
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1945
* https://tracker.ceph.com/issues/56446
1946
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1947
* https://tracker.ceph.com/issues/51964
1948
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1949
* https://tracker.ceph.com/issues/55804
1950
    Command failed (workunit test suites/pjd.sh)
1951
* https://tracker.ceph.com/issues/50223
1952
    client.xxxx isn't responding to mclientcaps(revoke)
1953
* https://tracker.ceph.com/issues/50821
1954 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1955 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1956 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1957
1958
h3. 2022 Aug 04
1959
1960
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1961
1962 69 Rishabh Dave
Unrelated teuthology failure on RHEL.
1963 68 Rishabh Dave
1964
h3. 2022 Jul 25
1965
1966
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1967
1968 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1969
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1970 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1971
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1972
1973
* https://tracker.ceph.com/issues/55804
1974
  Command failed (workunit test suites/pjd.sh)
1975
* https://tracker.ceph.com/issues/50223
1976
  client.xxxx isn't responding to mclientcaps(revoke)
1977
1978
* https://tracker.ceph.com/issues/54460
1979
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1980 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1981 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 (quota xattr sketch after this list)
1982 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1983 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128~
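As flagged against the quota.sh entry above, CephFS quotas are plain directory xattrs and the workunit exercises them roughly as below (a sketch; the mount point and the limits are placeholders):

<pre>
# set byte and file limits on a directory of a mounted CephFS
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/qa_dir
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/qa_dir
# read a limit back to confirm it stuck
getfattr -n ceph.quota.max_bytes /mnt/cephfs/qa_dir
</pre>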
1984
1985
h3. 2022 July 22
1986
1987
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1988
1989
MDS_HEALTH_DUMMY error in log fixed by followup commit.
1990
transient selinux ping failure
1991
1992
* https://tracker.ceph.com/issues/56694
1993
    qa: avoid blocking forever on hung umount
1994
* https://tracker.ceph.com/issues/56695
1995
    [RHEL stock] pjd test failures
1996
* https://tracker.ceph.com/issues/56696
1997
    admin keyring disappears during qa run
1998
* https://tracker.ceph.com/issues/56697
1999
    qa: fs/snaps fails for fuse
2000
* https://tracker.ceph.com/issues/50222
2001
    osd: 5.2s0 deep-scrub : stat mismatch
2002
* https://tracker.ceph.com/issues/56698
2003
    client: FAILED ceph_assert(_size == 0)
2004
* https://tracker.ceph.com/issues/50223
2005
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2006 66 Rishabh Dave
2007 65 Rishabh Dave
2008
h3. 2022 Jul 15
2009
2010
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2011
2012
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2013
2014
* https://tracker.ceph.com/issues/53859
2015
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2016
* https://tracker.ceph.com/issues/55804
2017
  Command failed (workunit test suites/pjd.sh)
2018
* https://tracker.ceph.com/issues/50223
2019
  client.xxxx isn't responding to mclientcaps(revoke)
2020
* https://tracker.ceph.com/issues/50222
2021
  osd: deep-scrub : stat mismatch
2022
2023
* https://tracker.ceph.com/issues/56632
2024
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2025
* https://tracker.ceph.com/issues/56634
2026
  workunit test fs/snaps/snaptest-intodir.sh
2027
* https://tracker.ceph.com/issues/56644
2028
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2029
2030 61 Rishabh Dave
2031
2032
h3. 2022 July 05
2033 62 Rishabh Dave
2034 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2035
2036
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2037
2038
On 2nd re-run only a few jobs failed -
2039 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2040
2041
2042
* https://tracker.ceph.com/issues/56446
2043
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2044
* https://tracker.ceph.com/issues/55804
2045
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2046
2047
* https://tracker.ceph.com/issues/56445
2048 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2049
* https://tracker.ceph.com/issues/51267
2050
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2051 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2052
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2053 61 Rishabh Dave
2054 58 Venky Shankar
2055
2056
h3. 2022 July 04
2057
2058
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2059
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
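For reference, scheduling the suite with the rhel jobs excluded looks roughly like this (a sketch; the exact teuthology-suite flags and the machine type are assumptions, the branch is the one under test above):

<pre>
teuthology-suite -v \
  --suite fs \
  --ceph wip-vshankar-testing-20220627-100931 \
  --machine-type smithi \
  --kernel testing \
  --filter-out rhel
</pre>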
2060
2061
* https://tracker.ceph.com/issues/56445
2062 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2063
* https://tracker.ceph.com/issues/56446
2064
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2065
* https://tracker.ceph.com/issues/51964
2066 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2067 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2068 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2069
2070
h3. 2022 June 20
2071
2072
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2073
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2074
2075
* https://tracker.ceph.com/issues/52624
2076
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2077
* https://tracker.ceph.com/issues/55804
2078
    qa failure: pjd link tests failed
2079
* https://tracker.ceph.com/issues/54108
2080
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2081
* https://tracker.ceph.com/issues/55332
2082 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2083
2084
h3. 2022 June 13
2085
2086
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2087
2088
* https://tracker.ceph.com/issues/56024
2089
    cephadm: removes ceph.conf during qa run causing command failure
2090
* https://tracker.ceph.com/issues/48773
2091
    qa: scrub does not complete
2092
* https://tracker.ceph.com/issues/56012
2093
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2094 55 Venky Shankar
2095 54 Venky Shankar
2096
h3. 2022 Jun 13
2097
2098
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2099
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2100
2101
* https://tracker.ceph.com/issues/52624
2102
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2103
* https://tracker.ceph.com/issues/51964
2104
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2105
* https://tracker.ceph.com/issues/53859
2106
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2107
* https://tracker.ceph.com/issues/55804
2108
    qa failure: pjd link tests failed
2109
* https://tracker.ceph.com/issues/56003
2110
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2111
* https://tracker.ceph.com/issues/56011
2112
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2113
* https://tracker.ceph.com/issues/56012
2114 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2115
2116
h3. 2022 Jun 07
2117
2118
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2119
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2120
2121
* https://tracker.ceph.com/issues/52624
2122
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2123
* https://tracker.ceph.com/issues/50223
2124
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2125
* https://tracker.ceph.com/issues/50224
2126 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2127
2128
h3. 2022 May 12
2129 52 Venky Shankar
2130 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2131
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2132
2133
* https://tracker.ceph.com/issues/52624
2134
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2135
* https://tracker.ceph.com/issues/50223
2136
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2137
* https://tracker.ceph.com/issues/55332
2138
    Failure in snaptest-git-ceph.sh
2139
* https://tracker.ceph.com/issues/53859
2140 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2141 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2142
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2143 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2144 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2145
2146 50 Venky Shankar
h3. 2022 May 04
2147
2148
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2149 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2150
2151
* https://tracker.ceph.com/issues/52624
2152
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2153
* https://tracker.ceph.com/issues/50223
2154
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2155
* https://tracker.ceph.com/issues/55332
2156
    Failure in snaptest-git-ceph.sh
2157
* https://tracker.ceph.com/issues/53859
2158
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2159
* https://tracker.ceph.com/issues/55516
2160
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2161
* https://tracker.ceph.com/issues/55537
2162
    mds: crash during fs:upgrade test
2163
* https://tracker.ceph.com/issues/55538
2164 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2165
2166
h3. 2022 Apr 25
2167
2168
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2169
2170
* https://tracker.ceph.com/issues/52624
2171
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2172
* https://tracker.ceph.com/issues/50223
2173
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2174
* https://tracker.ceph.com/issues/55258
2175
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2176
* https://tracker.ceph.com/issues/55377
2177 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2178
2179
h3. 2022 Apr 14
2180
2181
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2182
2183
* https://tracker.ceph.com/issues/52624
2184
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2185
* https://tracker.ceph.com/issues/50223
2186
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2187
* https://tracker.ceph.com/issues/52438
2188
    qa: ffsb timeout
2189
* https://tracker.ceph.com/issues/55170
2190
    mds: crash during rejoin (CDir::fetch_keys)
2191
* https://tracker.ceph.com/issues/55331
2192
    pjd failure
2193
* https://tracker.ceph.com/issues/48773
2194
    qa: scrub does not complete
2195
* https://tracker.ceph.com/issues/55332
2196
    Failure in snaptest-git-ceph.sh
2197
* https://tracker.ceph.com/issues/55258
2198 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2199
2200 46 Venky Shankar
h3. 2022 Apr 11
2201 45 Venky Shankar
2202
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2203
2204
* https://tracker.ceph.com/issues/48773
2205
    qa: scrub does not complete
2206
* https://tracker.ceph.com/issues/52624
2207
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2208
* https://tracker.ceph.com/issues/52438
2209
    qa: ffsb timeout
2210
* https://tracker.ceph.com/issues/48680
2211
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2212
* https://tracker.ceph.com/issues/55236
2213
    qa: fs/snaps tests fails with "hit max job timeout"
2214
* https://tracker.ceph.com/issues/54108
2215
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2216
* https://tracker.ceph.com/issues/54971
2217
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2218
* https://tracker.ceph.com/issues/50223
2219
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2220
* https://tracker.ceph.com/issues/55258
2221 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2222 42 Venky Shankar
2223 43 Venky Shankar
h3. 2022 Mar 21
2224
2225
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2226
2227
Run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.
2228
2229
2230 42 Venky Shankar
h3. 2022 Mar 08
2231
2232
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2233
2234
rerun with
2235
- (drop) https://github.com/ceph/ceph/pull/44679
2236
- (drop) https://github.com/ceph/ceph/pull/44958
2237
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2238
2239
* https://tracker.ceph.com/issues/54419 (new)
2240
    `ceph orch upgrade start` seems to never reach completion (upgrade-status sketch after this list)
2241
* https://tracker.ceph.com/issues/51964
2242
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2243
* https://tracker.ceph.com/issues/52624
2244
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2245
* https://tracker.ceph.com/issues/50223
2246
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2247
* https://tracker.ceph.com/issues/52438
2248
    qa: ffsb timeout
2249
* https://tracker.ceph.com/issues/50821
2250
    qa: untar_snap_rm failure during mds thrashing
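For the stalled `ceph orch upgrade start` noted above (54419), the usual way to see where it is wedged is roughly the following (a sketch, not specific to this run):

<pre>
# is the upgrade progressing, and on which daemons?
ceph orch upgrade status
# follow cephadm's event log while it retries
ceph -W cephadm
# abort a wedged upgrade before rescheduling it
ceph orch upgrade stop
</pre>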
2251 41 Venky Shankar
2252
2253
h3. 2022 Feb 09
2254
2255
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2256
2257
rerun with
2258
- (drop) https://github.com/ceph/ceph/pull/37938
2259
- (drop) https://github.com/ceph/ceph/pull/44335
2260
- (drop) https://github.com/ceph/ceph/pull/44491
2261
- (drop) https://github.com/ceph/ceph/pull/44501
2262
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2263
2264
* https://tracker.ceph.com/issues/51964
2265
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2266
* https://tracker.ceph.com/issues/54066
2267
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2268
* https://tracker.ceph.com/issues/48773
2269
    qa: scrub does not complete
2270
* https://tracker.ceph.com/issues/52624
2271
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2272
* https://tracker.ceph.com/issues/50223
2273
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2274
* https://tracker.ceph.com/issues/52438
2275 40 Patrick Donnelly
    qa: ffsb timeout
2276
2277
h3. 2022 Feb 01
2278
2279
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2280
2281
* https://tracker.ceph.com/issues/54107
2282
    kclient: hang during umount
2283
* https://tracker.ceph.com/issues/54106
2284
    kclient: hang during workunit cleanup
2285
* https://tracker.ceph.com/issues/54108
2286
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2287
* https://tracker.ceph.com/issues/48773
2288
    qa: scrub does not complete
2289
* https://tracker.ceph.com/issues/52438
2290
    qa: ffsb timeout
2291 36 Venky Shankar
2292
2293
h3. 2022 Jan 13
2294 39 Venky Shankar
2295 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2296 38 Venky Shankar
2297
rerun with:
2298 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2299
- (drop) https://github.com/ceph/ceph/pull/43184
2300
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2301
2302
* https://tracker.ceph.com/issues/50223
2303
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2304
* https://tracker.ceph.com/issues/51282
2305
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2306
* https://tracker.ceph.com/issues/48773
2307
    qa: scrub does not complete
2308
* https://tracker.ceph.com/issues/52624
2309
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2310
* https://tracker.ceph.com/issues/53859
2311 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2312
2313
h3. 2022 Jan 03
2314
2315
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2316
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2317
2318
* https://tracker.ceph.com/issues/50223
2319
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2320
* https://tracker.ceph.com/issues/51964
2321
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2322
* https://tracker.ceph.com/issues/51267
2323
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2324
* https://tracker.ceph.com/issues/51282
2325
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2326
* https://tracker.ceph.com/issues/50821
2327
    qa: untar_snap_rm failure during mds thrashing
2328 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2329
    mds: "FAILED ceph_assert(!segments.empty())"
2330
* https://tracker.ceph.com/issues/52279
2331 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2332 33 Patrick Donnelly
2333
2334
h3. 2021 Dec 22
2335
2336
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2337
2338
* https://tracker.ceph.com/issues/52624
2339
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2340
* https://tracker.ceph.com/issues/50223
2341
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2342
* https://tracker.ceph.com/issues/52279
2343
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2344
* https://tracker.ceph.com/issues/50224
2345
    qa: test_mirroring_init_failure_with_recovery failure
2346
* https://tracker.ceph.com/issues/48773
2347
    qa: scrub does not complete
2348 32 Venky Shankar
2349
2350
h3. 2021 Nov 30
2351
2352
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2353
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2354
2355
* https://tracker.ceph.com/issues/53436
2356
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2357
* https://tracker.ceph.com/issues/51964
2358
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2359
* https://tracker.ceph.com/issues/48812
2360
    qa: test_scrub_pause_and_resume_with_abort failure
2361
* https://tracker.ceph.com/issues/51076
2362
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2363
* https://tracker.ceph.com/issues/50223
2364
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2365
* https://tracker.ceph.com/issues/52624
2366
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2367
* https://tracker.ceph.com/issues/50250
2368
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2369 31 Patrick Donnelly
2370
2371
h3. 2021 November 9
2372
2373
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2374
2375
* https://tracker.ceph.com/issues/53214
2376
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2377
* https://tracker.ceph.com/issues/48773
2378
    qa: scrub does not complete
2379
* https://tracker.ceph.com/issues/50223
2380
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2381
* https://tracker.ceph.com/issues/51282
2382
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2383
* https://tracker.ceph.com/issues/52624
2384
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2385
* https://tracker.ceph.com/issues/53216
2386
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2387
* https://tracker.ceph.com/issues/50250
2388
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2389
2390 30 Patrick Donnelly
2391
2392
h3. 2021 November 03
2393
2394
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2395
2396
* https://tracker.ceph.com/issues/51964
2397
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2398
* https://tracker.ceph.com/issues/51282
2399
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2400
* https://tracker.ceph.com/issues/52436
2401
    fs/ceph: "corrupt mdsmap"
2402
* https://tracker.ceph.com/issues/53074
2403
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2404
* https://tracker.ceph.com/issues/53150
2405
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2406
* https://tracker.ceph.com/issues/53155
2407
    MDSMonitor: assertion during upgrade to v16.2.5+
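The three cephadm/MDSMonitor trackers above all sit around the MDS step of the upgrade; for context, the documented pre-upgrade sequence is roughly the following (a sketch; the file system name cephfs is a placeholder):

<pre>
# reduce to a single active MDS before upgrading, then wait for the extra ranks to stop
ceph fs set cephfs allow_standby_replay false
ceph fs set cephfs max_mds 1
ceph status
</pre>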
2408 29 Patrick Donnelly
2409
2410
h3. 2021 October 26
2411
2412
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2413
2414
* https://tracker.ceph.com/issues/53074
2415
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2416
* https://tracker.ceph.com/issues/52997
2417
    testing: hanging umount
2418
* https://tracker.ceph.com/issues/50824
2419
    qa: snaptest-git-ceph bus error
2420
* https://tracker.ceph.com/issues/52436
2421
    fs/ceph: "corrupt mdsmap"
2422
* https://tracker.ceph.com/issues/48773
2423
    qa: scrub does not complete
2424
* https://tracker.ceph.com/issues/53082
2425
    ceph-fuse: segmentation fault in Client::handle_mds_map
2426
* https://tracker.ceph.com/issues/50223
2427
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2428
* https://tracker.ceph.com/issues/52624
2429
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2430
* https://tracker.ceph.com/issues/50224
2431
    qa: test_mirroring_init_failure_with_recovery failure
2432
* https://tracker.ceph.com/issues/50821
2433
    qa: untar_snap_rm failure during mds thrashing
2434
* https://tracker.ceph.com/issues/50250
2435
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2436
2437 27 Patrick Donnelly
2438
2439 28 Patrick Donnelly
h3. 2021 October 19
2440 27 Patrick Donnelly
2441
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2442
2443
* https://tracker.ceph.com/issues/52995
2444
    qa: test_standby_count_wanted failure
2445
* https://tracker.ceph.com/issues/52948
2446
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2447
* https://tracker.ceph.com/issues/52996
2448
    qa: test_perf_counters via test_openfiletable
2449
* https://tracker.ceph.com/issues/48772
2450
    qa: pjd: not ok 9, 44, 80
2451
* https://tracker.ceph.com/issues/52997
2452
    testing: hanging umount
2453
* https://tracker.ceph.com/issues/50250
2454
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2455
* https://tracker.ceph.com/issues/52624
2456
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2457
* https://tracker.ceph.com/issues/50223
2458
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2459
* https://tracker.ceph.com/issues/50821
2460
    qa: untar_snap_rm failure during mds thrashing
2461
* https://tracker.ceph.com/issues/48773
2462
    qa: scrub does not complete
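The scrub items above (48773, 50250) recur in most runs; poking at them by hand looks roughly like this (a sketch; mds.a stands in for whichever daemon the cluster log names):

<pre>
# start a recursive scrub from the root and watch its progress
ceph tell mds.a scrub start / recursive
ceph tell mds.a scrub status
# list whatever damage the scrub recorded
ceph tell mds.a damage ls
</pre>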
2463 26 Patrick Donnelly
2464
2465
h3. 2021 October 12
2466
2467
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2468
2469
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2470
2471
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2472
2473
2474
* https://tracker.ceph.com/issues/51282
2475
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2476
* https://tracker.ceph.com/issues/52948
2477
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2478
* https://tracker.ceph.com/issues/48773
2479
    qa: scrub does not complete
2480
* https://tracker.ceph.com/issues/50224
2481
    qa: test_mirroring_init_failure_with_recovery failure
2482
* https://tracker.ceph.com/issues/52949
2483
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2484 25 Patrick Donnelly
2485 23 Patrick Donnelly
2486 24 Patrick Donnelly
h3. 2021 October 02
2487
2488
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2489
2490
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2491
2492
test_simple failures caused by PR in this set.
2493
2494
A few reruns because of QA infra noise.
2495
2496
* https://tracker.ceph.com/issues/52822
2497
    qa: failed pacific install on fs:upgrade
2498
* https://tracker.ceph.com/issues/52624
2499
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2500
* https://tracker.ceph.com/issues/50223
2501
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2502
* https://tracker.ceph.com/issues/48773
2503
    qa: scrub does not complete
2504
2505
2506 23 Patrick Donnelly
h3. 2021 September 20
2507
2508
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2509
2510
* https://tracker.ceph.com/issues/52677
2511
    qa: test_simple failure
2512
* https://tracker.ceph.com/issues/51279
2513
    kclient hangs on umount (testing branch)
2514
* https://tracker.ceph.com/issues/50223
2515
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2516
* https://tracker.ceph.com/issues/50250
2517
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2518
* https://tracker.ceph.com/issues/52624
2519
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2520
* https://tracker.ceph.com/issues/52438
2521
    qa: ffsb timeout
2522 22 Patrick Donnelly
2523
2524
h3. 2021 September 10
2525
2526
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2527
2528
* https://tracker.ceph.com/issues/50223
2529
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2530
* https://tracker.ceph.com/issues/50250
2531
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2532
* https://tracker.ceph.com/issues/52624
2533
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2534
* https://tracker.ceph.com/issues/52625
2535
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2536
* https://tracker.ceph.com/issues/52439
2537
    qa: acls does not compile on centos stream
2538
* https://tracker.ceph.com/issues/50821
2539
    qa: untar_snap_rm failure during mds thrashing
2540
* https://tracker.ceph.com/issues/48773
2541
    qa: scrub does not complete
2542
* https://tracker.ceph.com/issues/52626
2543
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2544
* https://tracker.ceph.com/issues/51279
2545
    kclient hangs on umount (testing branch)
2546 21 Patrick Donnelly
2547
2548
h3. 2021 August 27
2549
2550
Several jobs died because of device failures.
2551
2552
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2553
2554
* https://tracker.ceph.com/issues/52430
2555
    mds: fast async create client mount breaks racy test
2556
* https://tracker.ceph.com/issues/52436
2557
    fs/ceph: "corrupt mdsmap"
2558
* https://tracker.ceph.com/issues/52437
2559
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2560
* https://tracker.ceph.com/issues/51282
2561
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2562
* https://tracker.ceph.com/issues/52438
2563
    qa: ffsb timeout
2564
* https://tracker.ceph.com/issues/52439
2565
    qa: acls does not compile on centos stream
2566 20 Patrick Donnelly
2567
2568
h3. 2021 July 30
2569
2570
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2571
2572
* https://tracker.ceph.com/issues/50250
2573
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2574
* https://tracker.ceph.com/issues/51282
2575
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2576
* https://tracker.ceph.com/issues/48773
2577
    qa: scrub does not complete
2578
* https://tracker.ceph.com/issues/51975
2579
    pybind/mgr/stats: KeyError
2580 19 Patrick Donnelly
2581
2582
h3. 2021 July 28
2583
2584
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2585
2586
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2587
2588
* https://tracker.ceph.com/issues/51905
2589
    qa: "error reading sessionmap 'mds1_sessionmap'"
2590
* https://tracker.ceph.com/issues/48773
2591
    qa: scrub does not complete
2592
* https://tracker.ceph.com/issues/50250
2593
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2594
* https://tracker.ceph.com/issues/51267
2595
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2596
* https://tracker.ceph.com/issues/51279
2597
    kclient hangs on umount (testing branch)
2598 18 Patrick Donnelly
2599
2600
h3. 2021 July 16
2601
2602
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2603
2604
* https://tracker.ceph.com/issues/48773
2605
    qa: scrub does not complete
2606
* https://tracker.ceph.com/issues/48772
2607
    qa: pjd: not ok 9, 44, 80
2608
* https://tracker.ceph.com/issues/45434
2609
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2610
* https://tracker.ceph.com/issues/51279
2611
    kclient hangs on umount (testing branch)
2612
* https://tracker.ceph.com/issues/50824
2613
    qa: snaptest-git-ceph bus error
2614 17 Patrick Donnelly
2615
2616
h3. 2021 July 04
2617
2618
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2619
2620
* https://tracker.ceph.com/issues/48773
2621
    qa: scrub does not complete
2622
* https://tracker.ceph.com/issues/39150
2623
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2624
* https://tracker.ceph.com/issues/45434
2625
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2626
* https://tracker.ceph.com/issues/51282
2627
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2628
* https://tracker.ceph.com/issues/48771
2629
    qa: iogen: workload fails to cause balancing
2630
* https://tracker.ceph.com/issues/51279
2631
    kclient hangs on umount (testing branch)
2632
* https://tracker.ceph.com/issues/50250
2633
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2634 16 Patrick Donnelly
2635
2636
h3. 2021 July 01
2637
2638
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2639
2640
* https://tracker.ceph.com/issues/51197
2641
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2642
* https://tracker.ceph.com/issues/50866
2643
    osd: stat mismatch on objects
2644
* https://tracker.ceph.com/issues/48773
2645
    qa: scrub does not complete
2646 15 Patrick Donnelly
2647
2648
h3. 2021 June 26
2649
2650
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2651
2652
* https://tracker.ceph.com/issues/51183
2653
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2654
* https://tracker.ceph.com/issues/51410
2655
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2656
* https://tracker.ceph.com/issues/48773
2657
    qa: scrub does not complete
2658
* https://tracker.ceph.com/issues/51282
2659
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2660
* https://tracker.ceph.com/issues/51169
2661
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2662
* https://tracker.ceph.com/issues/48772
2663
    qa: pjd: not ok 9, 44, 80
2664 14 Patrick Donnelly
2665
2666
h3. 2021 June 21
2667
2668
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2669
2670
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2671
2672
* https://tracker.ceph.com/issues/51282
2673
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2674
* https://tracker.ceph.com/issues/51183
2675
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2676
* https://tracker.ceph.com/issues/48773
2677
    qa: scrub does not complete
2678
* https://tracker.ceph.com/issues/48771
2679
    qa: iogen: workload fails to cause balancing
2680
* https://tracker.ceph.com/issues/51169
2681
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2682
* https://tracker.ceph.com/issues/50495
2683
    libcephfs: shutdown race fails with status 141
2684
* https://tracker.ceph.com/issues/45434
2685
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2686
* https://tracker.ceph.com/issues/50824
2687
    qa: snaptest-git-ceph bus error
2688
* https://tracker.ceph.com/issues/50223
2689
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2690 13 Patrick Donnelly
2691
2692
h3. 2021 June 16
2693
2694
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2695
2696
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2697
2698
* https://tracker.ceph.com/issues/45434
2699
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2700
* https://tracker.ceph.com/issues/51169
2701
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2702
* https://tracker.ceph.com/issues/43216
2703
    MDSMonitor: removes MDS coming out of quorum election
2704
* https://tracker.ceph.com/issues/51278
2705
    mds: "FAILED ceph_assert(!segments.empty())"
2706
* https://tracker.ceph.com/issues/51279
2707
    kclient hangs on umount (testing branch)
2708
* https://tracker.ceph.com/issues/51280
2709
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2710
* https://tracker.ceph.com/issues/51183
2711
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2712
* https://tracker.ceph.com/issues/51281
2713
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2714
* https://tracker.ceph.com/issues/48773
2715
    qa: scrub does not complete
2716
* https://tracker.ceph.com/issues/51076
2717
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2718
* https://tracker.ceph.com/issues/51228
2719
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2720
* https://tracker.ceph.com/issues/51282
2721
    pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings
2722 12 Patrick Donnelly
2723
2724
h3. 2021 June 14
2725
2726
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2727
2728
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2729
2730
* https://tracker.ceph.com/issues/51169
2731
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2732
* https://tracker.ceph.com/issues/51228
2733
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2734
* https://tracker.ceph.com/issues/48773
2735
    qa: scrub does not complete
2736
* https://tracker.ceph.com/issues/51183
2737
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2738
* https://tracker.ceph.com/issues/45434
2739
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2740
* https://tracker.ceph.com/issues/51182
2741
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2742
* https://tracker.ceph.com/issues/51229
2743
    qa: test_multi_snap_schedule list difference failure
2744
* https://tracker.ceph.com/issues/50821
2745
    qa: untar_snap_rm failure during mds thrashing
2746 11 Patrick Donnelly
2747
2748
h3. 2021 June 13
2749
2750
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2751
2752
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2753
2754
* https://tracker.ceph.com/issues/51169
2755
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2756
* https://tracker.ceph.com/issues/48773
2757
    qa: scrub does not complete
2758
* https://tracker.ceph.com/issues/51182
2759
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2760
* https://tracker.ceph.com/issues/51183
2761
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2762
* https://tracker.ceph.com/issues/51197
2763
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2764
* https://tracker.ceph.com/issues/45434
2765 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2766
2767
h3. 2021 June 11
2768
2769
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2770
2771
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2772
2773
* https://tracker.ceph.com/issues/51169
2774
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2775
* https://tracker.ceph.com/issues/45434
2776
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2777
* https://tracker.ceph.com/issues/48771
2778
    qa: iogen: workload fails to cause balancing
2779
* https://tracker.ceph.com/issues/43216
2780
    MDSMonitor: removes MDS coming out of quorum election
2781
* https://tracker.ceph.com/issues/51182
2782
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2783
* https://tracker.ceph.com/issues/50223
2784
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2785
* https://tracker.ceph.com/issues/48773
2786
    qa: scrub does not complete
2787
* https://tracker.ceph.com/issues/51183
2788
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2789
* https://tracker.ceph.com/issues/51184
2790
    qa: fs:bugs does not specify distro
2791 9 Patrick Donnelly
2792
2793
h3. 2021 June 03
2794
2795
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2796
2797
* https://tracker.ceph.com/issues/45434
2798
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2799
* https://tracker.ceph.com/issues/50016
2800
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2801
* https://tracker.ceph.com/issues/50821
2802
    qa: untar_snap_rm failure during mds thrashing
2803
* https://tracker.ceph.com/issues/50622 (regression)
2804
    msg: active_connections regression
2805
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2806
    qa: failed umount in test_volumes
2807
* https://tracker.ceph.com/issues/48773
2808
    qa: scrub does not complete
2809
* https://tracker.ceph.com/issues/43216
2810
    MDSMonitor: removes MDS coming out of quorum election
2811 7 Patrick Donnelly
2812
2813 8 Patrick Donnelly
h3. 2021 May 18
2814
2815
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2816
2817
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2818
looked better. Some odd new noise in the rerun relating to packaging and "No
2819
module named 'tasks.ceph'".
2820
2821
* https://tracker.ceph.com/issues/50824
2822
    qa: snaptest-git-ceph bus error
2823
* https://tracker.ceph.com/issues/50622 (regression)
2824
    msg: active_connections regression
2825
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2826
    qa: failed umount in test_volumes
2827
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2828
    qa: quota failure
2829
2830
2831 7 Patrick Donnelly
h3. 2021 May 18
2832
2833
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2834
2835
* https://tracker.ceph.com/issues/50821
2836
    qa: untar_snap_rm failure during mds thrashing
2837
* https://tracker.ceph.com/issues/48773
2838
    qa: scrub does not complete
2839
* https://tracker.ceph.com/issues/45591
2840
    mgr: FAILED ceph_assert(daemon != nullptr)
2841
* https://tracker.ceph.com/issues/50866
2842
    osd: stat mismatch on objects
2843
* https://tracker.ceph.com/issues/50016
2844
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2845
* https://tracker.ceph.com/issues/50867
2846
    qa: fs:mirror: reduced data availability
2847
2849
* https://tracker.ceph.com/issues/50622 (regression)
2850
    msg: active_connections regression
2851
* https://tracker.ceph.com/issues/50223
2852
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2853
* https://tracker.ceph.com/issues/50868
2854
    qa: "kern.log.gz already exists; not overwritten"
2855
* https://tracker.ceph.com/issues/50870
2856
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2857 6 Patrick Donnelly
2858
2859
h3. 2021 May 11
2860
2861
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2862
2863
* one class of failures was caused by a PR
2864
* https://tracker.ceph.com/issues/48812
2865
    qa: test_scrub_pause_and_resume_with_abort failure
2866
* https://tracker.ceph.com/issues/50390
2867
    mds: monclient: wait_auth_rotating timed out after 30
2868
* https://tracker.ceph.com/issues/48773
2869
    qa: scrub does not complete
2870
* https://tracker.ceph.com/issues/50821
2871
    qa: untar_snap_rm failure during mds thrashing
2872
* https://tracker.ceph.com/issues/50224
2873
    qa: test_mirroring_init_failure_with_recovery failure
2874
* https://tracker.ceph.com/issues/50622 (regression)
2875
    msg: active_connections regression
2876
* https://tracker.ceph.com/issues/50825
2877
    qa: snaptest-git-ceph hang during mon thrashing v2
2878
2880
* https://tracker.ceph.com/issues/50823
2881
    qa: RuntimeError: timeout waiting for cluster to stabilize
2882 5 Patrick Donnelly
2883
2884
h3. 2021 May 14
2885
2886
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2887
2888
* https://tracker.ceph.com/issues/48812
2889
    qa: test_scrub_pause_and_resume_with_abort failure
2890
* https://tracker.ceph.com/issues/50821
2891
    qa: untar_snap_rm failure during mds thrashing
2892
* https://tracker.ceph.com/issues/50622 (regression)
2893
    msg: active_connections regression
2894
* https://tracker.ceph.com/issues/50822
2895
    qa: testing kernel patch for client metrics causes mds abort
2896
* https://tracker.ceph.com/issues/48773
2897
    qa: scrub does not complete
2898
* https://tracker.ceph.com/issues/50823
2899
    qa: RuntimeError: timeout waiting for cluster to stabilize
2900
* https://tracker.ceph.com/issues/50824
2901
    qa: snaptest-git-ceph bus error
2902
* https://tracker.ceph.com/issues/50825
2903
    qa: snaptest-git-ceph hang during mon thrashing v2
2904
* https://tracker.ceph.com/issues/50826
2905
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2906 4 Patrick Donnelly
2907
2908
h3. 2021 May 01
2909
2910
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2911
2912
* https://tracker.ceph.com/issues/45434
2913
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2914
* https://tracker.ceph.com/issues/50281
2915
    qa: untar_snap_rm timeout
2916
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2917
    qa: quota failure
2918
* https://tracker.ceph.com/issues/48773
2919
    qa: scrub does not complete
2920
* https://tracker.ceph.com/issues/50390
2921
    mds: monclient: wait_auth_rotating timed out after 30
2922
* https://tracker.ceph.com/issues/50250
2923
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2924
* https://tracker.ceph.com/issues/50622 (regression)
2925
    msg: active_connections regression
2926
* https://tracker.ceph.com/issues/45591
2927
    mgr: FAILED ceph_assert(daemon != nullptr)
2928
* https://tracker.ceph.com/issues/50221
2929
    qa: snaptest-git-ceph failure in git diff
2930
* https://tracker.ceph.com/issues/50016
2931
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2932 3 Patrick Donnelly
2933
2934
h3. 2021 Apr 15
2935
2936
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2937
2938
* https://tracker.ceph.com/issues/50281
2939
    qa: untar_snap_rm timeout
2940
* https://tracker.ceph.com/issues/50220
2941
    qa: dbench workload timeout
2942
* https://tracker.ceph.com/issues/50246
2943
    mds: failure replaying journal (EMetaBlob)
2944
* https://tracker.ceph.com/issues/50250
2945
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2946
* https://tracker.ceph.com/issues/50016
2947
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2948
* https://tracker.ceph.com/issues/50222
2949
    osd: 5.2s0 deep-scrub : stat mismatch
2950
* https://tracker.ceph.com/issues/45434
2951
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2952
* https://tracker.ceph.com/issues/49845
2953
    qa: failed umount in test_volumes
2954
* https://tracker.ceph.com/issues/37808
2955
    osd: osdmap cache weak_refs assert during shutdown
2956
* https://tracker.ceph.com/issues/50387
2957
    client: fs/snaps failure
2958
* https://tracker.ceph.com/issues/50389
2959
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
2960
* https://tracker.ceph.com/issues/50216
2961
    qa: "ls: cannot access 'lost+found': No such file or directory"
2962
* https://tracker.ceph.com/issues/50390
2963
    mds: monclient: wait_auth_rotating timed out after 30
2964
2965 1 Patrick Donnelly
2966
2967 2 Patrick Donnelly
h3. 2021 Apr 08
2968
2969
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2970
2971
* https://tracker.ceph.com/issues/45434
2972
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2973
* https://tracker.ceph.com/issues/50016
2974
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2975
* https://tracker.ceph.com/issues/48773
2976
    qa: scrub does not complete
2977
* https://tracker.ceph.com/issues/50279
2978
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2979
* https://tracker.ceph.com/issues/50246
2980
    mds: failure replaying journal (EMetaBlob)
2981
* https://tracker.ceph.com/issues/48365
2982
    qa: ffsb build failure on CentOS 8.2
2983
* https://tracker.ceph.com/issues/50216
2984
    qa: "ls: cannot access 'lost+found': No such file or directory"
2985
* https://tracker.ceph.com/issues/50223
2986
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2987
* https://tracker.ceph.com/issues/50280
2988
    cephadm: RuntimeError: uid/gid not found
2989
* https://tracker.ceph.com/issues/50281
2990
    qa: untar_snap_rm timeout
2991
2992 1 Patrick Donnelly
h3. 2021 Apr 08
2993
2994
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2995
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2996
2997
* https://tracker.ceph.com/issues/50246
2998
    mds: failure replaying journal (EMetaBlob) (journal-tool sketch below)
2999
* https://tracker.ceph.com/issues/50250
3000
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3001
3002
3003
h3. 2021 Apr 07
3004
3005
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
3006
3007
* https://tracker.ceph.com/issues/50215
3008
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
3009
* https://tracker.ceph.com/issues/49466
3010
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3011
* https://tracker.ceph.com/issues/50216
3012
    qa: "ls: cannot access 'lost+found': No such file or directory"
3013
* https://tracker.ceph.com/issues/48773
3014
    qa: scrub does not complete
3015
* https://tracker.ceph.com/issues/49845
3016
    qa: failed umount in test_volumes
3017
* https://tracker.ceph.com/issues/50220
3018
    qa: dbench workload timeout
3019
* https://tracker.ceph.com/issues/50221
3020
    qa: snaptest-git-ceph failure in git diff
3021
* https://tracker.ceph.com/issues/50222
3022
    osd: 5.2s0 deep-scrub : stat mismatch
3023
* https://tracker.ceph.com/issues/50223
3024
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3025
* https://tracker.ceph.com/issues/50224
3026
    qa: test_mirroring_init_failure_with_recovery failure
3027
3028
h3. 2021 Apr 01
3029
3030
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
3031
3032
* https://tracker.ceph.com/issues/48772
3033
    qa: pjd: not ok 9, 44, 80
3034
* https://tracker.ceph.com/issues/50177
3035
    osd: "stalled aio... buggy kernel or bad device?"
3036
* https://tracker.ceph.com/issues/48771
3037
    qa: iogen: workload fails to cause balancing
3038
* https://tracker.ceph.com/issues/49845
3039
    qa: failed umount in test_volumes
3040
* https://tracker.ceph.com/issues/48773
3041
    qa: scrub does not complete
3042
* https://tracker.ceph.com/issues/48805
3043
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3044
* https://tracker.ceph.com/issues/50178
3045
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3046
* https://tracker.ceph.com/issues/45434
3047
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3048
3049
h3. 2021 Mar 24
3050
3051
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3052
3053
* https://tracker.ceph.com/issues/49500
3054
    qa: "Assertion `cb_done' failed."
3055
* https://tracker.ceph.com/issues/50019
3056
    qa: mount failure with cephadm "probably no MDS server is up?"
3057
* https://tracker.ceph.com/issues/50020
3058
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3059
* https://tracker.ceph.com/issues/48773
3060
    qa: scrub does not complete
3061
* https://tracker.ceph.com/issues/45434
3062
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3063
* https://tracker.ceph.com/issues/48805
3064
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3065
* https://tracker.ceph.com/issues/48772
3066
    qa: pjd: not ok 9, 44, 80
3067
* https://tracker.ceph.com/issues/50021
3068
    qa: snaptest-git-ceph failure during mon thrashing
3069
* https://tracker.ceph.com/issues/48771
3070
    qa: iogen: workload fails to cause balancing
3071
* https://tracker.ceph.com/issues/50016
3072
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3073
* https://tracker.ceph.com/issues/49466
3074
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3075
3076
3077
h3. 2021 Mar 18
3078
3079
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3080
3081
* https://tracker.ceph.com/issues/49466
3082
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3083
* https://tracker.ceph.com/issues/48773
3084
    qa: scrub does not complete
3085
* https://tracker.ceph.com/issues/48805
3086
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3087
* https://tracker.ceph.com/issues/45434
3088
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3089
* https://tracker.ceph.com/issues/49845
3090
    qa: failed umount in test_volumes
3091
* https://tracker.ceph.com/issues/49605
3092
    mgr: drops command on the floor
3093
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3094
    qa: quota failure
3095
* https://tracker.ceph.com/issues/49928
3096
    client: items pinned in cache preventing unmount x2
3097
3098
h3. 2021 Mar 15
3099
3100
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3101
3102
* https://tracker.ceph.com/issues/49842
3103
    qa: stuck pkg install
3104
* https://tracker.ceph.com/issues/49466
3105
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3106
* https://tracker.ceph.com/issues/49822
3107
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3108
* https://tracker.ceph.com/issues/49240
3109
    terminate called after throwing an instance of 'std::bad_alloc'
3110
* https://tracker.ceph.com/issues/48773
3111
    qa: scrub does not complete
3112
* https://tracker.ceph.com/issues/45434
3113
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3114
* https://tracker.ceph.com/issues/49500
3115
    qa: "Assertion `cb_done' failed."
3116
* https://tracker.ceph.com/issues/49843
3117
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3118
* https://tracker.ceph.com/issues/49845
3119
    qa: failed umount in test_volumes
3120
* https://tracker.ceph.com/issues/48805
3121
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3122
* https://tracker.ceph.com/issues/49605
3123
    mgr: drops command on the floor
3124
3125
and a failure caused by PR: https://github.com/ceph/ceph/pull/39969
3126
3127
3128
h3. 2021 Mar 09
3129
3130
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3131
3132
* https://tracker.ceph.com/issues/49500
3133
    qa: "Assertion `cb_done' failed."
3134
* https://tracker.ceph.com/issues/48805
3135
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3136
* https://tracker.ceph.com/issues/48773
3137
    qa: scrub does not complete
3138
* https://tracker.ceph.com/issues/45434
3139
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3140
* https://tracker.ceph.com/issues/49240
3141
    terminate called after throwing an instance of 'std::bad_alloc'
3142
* https://tracker.ceph.com/issues/49466
3143
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3144
* https://tracker.ceph.com/issues/49684
3145
    qa: fs:cephadm mount does not wait for mds to be created
3146
* https://tracker.ceph.com/issues/48771
3147
    qa: iogen: workload fails to cause balancing