Main » History » Version 250

Rishabh Dave, 04/04/2024 10:07 AM

1 221 Patrick Donnelly
h1. <code>main</code> branch
2 1 Patrick Donnelly
3 247 Rishabh Dave
h3. ADD NEW ENTRY HERE
4
5 249 Rishabh Dave
h3. 4 Apr 2024
6 246 Rishabh Dave
7
https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/
8
9
* https://tracker.ceph.com/issues/64927
10
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
11
* https://tracker.ceph.com/issues/65022
12
  qa: test_max_items_per_obj open procs not fully cleaned up
13
* https://tracker.ceph.com/issues/63699
14
  qa: failed cephfs-shell test_reading_conf
15
* https://tracker.ceph.com/issues/63700
16
  qa: test_cd_with_args failure
17
* https://tracker.ceph.com/issues/65136
18
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
19
* https://tracker.ceph.com/issues/65246
20
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
21
22 248 Rishabh Dave
23 246 Rishabh Dave
* https://tracker.ceph.com/issues/58945
24
  qa: xfstests-dev's generic test suite has failures with fuse client
25 1 Patrick Donnelly
* https://tracker.ceph.com/issues/57656
26 248 Rishabh Dave
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
27 246 Rishabh Dave
* https://tracker.ceph.com/issues/63265
28 1 Patrick Donnelly
  qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
29 246 Rishabh Dave
* https://tracker.ceph.com/issues/62067
30 248 Rishabh Dave
  ffsb.sh failure "Resource temporarily unavailable" 
31 246 Rishabh Dave
* https://tracker.ceph.com/issues/63949
32
  leak in mds.c detected by valgrind during CephFS QA run
33
* https://tracker.ceph.com/issues/48562
34
  qa: scrub - object missing on disk; some files may be lost
35
* https://tracker.ceph.com/issues/65020
36
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
37
* https://tracker.ceph.com/issues/64572
38
  workunits/fsx.sh failure
39
* https://tracker.ceph.com/issues/57676
40
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
41 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64502
42 246 Rishabh Dave
  client: ceph-fuse fails to unmount after upgrade to main
43 1 Patrick Donnelly
* https://tracker.ceph.com/issues/54741
44
  crash: MDSTableClient::got_journaled_ack(unsigned long)
45 250 Rishabh Dave
46 248 Rishabh Dave
* https://tracker.ceph.com/issues/65265
47
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
48 1 Patrick Donnelly
* https://tracker.ceph.com/issues/65308
49
  qa: fs was offline but also unexpectedly degraded
50
* https://tracker.ceph.com/issues/65309
51
  qa: dbench.sh failed with "ERROR: handle 10318 was not found"
52 250 Rishabh Dave
53
* https://tracker.ceph.com/issues/65018
54
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)" 
55
* https://tracker.ceph.com/issues/52624
56
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
57 245 Rishabh Dave
58 240 Patrick Donnelly
h3. 2024-04-02
59
60
https://tracker.ceph.com/issues/65215
61
62
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
63
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
64
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
65
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
66
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
67
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
68
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
69
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
70
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
71
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
72 241 Patrick Donnelly
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log":https://tracker.ceph.com/issues/65021
73
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
74
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
75
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
76
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
77 244 Patrick Donnelly
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
78 241 Patrick Donnelly
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534
79 240 Patrick Donnelly
80 236 Patrick Donnelly
h3. 2024-03-28
81
82
https://tracker.ceph.com/issues/65213
83
84 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
85
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
86
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
87 238 Patrick Donnelly
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
88
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
89
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
90 239 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
91
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
92
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
93
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
94
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
95
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
96
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
97
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
98
99
100 236 Patrick Donnelly
101 235 Milind Changire
h3. 2024-03-25
102
103
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
104
* https://tracker.ceph.com/issues/64502
105
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
106
107
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
108
109
* https://tracker.ceph.com/issues/62245
110
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
111
112
113 228 Patrick Donnelly
h3. 2024-03-20
114
115 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
116 228 Patrick Donnelly
117 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
118
119 229 Patrick Donnelly
Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.
120 1 Patrick Donnelly
121 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
122 228 Patrick Donnelly
123 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
124
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
125
* https://tracker.ceph.com/issues/64572
126
    workunits/fsx.sh failure
127
* https://tracker.ceph.com/issues/65018
128
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
129
* https://tracker.ceph.com/issues/64707 (new issue)
130
    suites/fsstress.sh hangs on one client - test times out
131 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
132
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
133
* https://tracker.ceph.com/issues/59684
134
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
135 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
136
    qa: "ceph tell 4.3a deep-scrub" command not found
137
* https://tracker.ceph.com/issues/54108
138
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
139
* https://tracker.ceph.com/issues/65019
140
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log 
141
* https://tracker.ceph.com/issues/65020
142
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
143
* https://tracker.ceph.com/issues/65021
144
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
145
* https://tracker.ceph.com/issues/63699
146
    qa: failed cephfs-shell test_reading_conf
147 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
148
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
149
* https://tracker.ceph.com/issues/50821
150
    qa: untar_snap_rm failure during mds thrashing
151 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
152
    qa: test_max_items_per_obj open procs not fully cleaned up
153 228 Patrick Donnelly
154 226 Venky Shankar
h3. 14th March 2024
155
156
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
157
158 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)
159 226 Venky Shankar
160
* https://tracker.ceph.com/issues/62067
161
    ffsb.sh failure "Resource temporarily unavailable"
162
* https://tracker.ceph.com/issues/57676
163
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
164
* https://tracker.ceph.com/issues/64502
165
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
166
* https://tracker.ceph.com/issues/64572
167
    workunits/fsx.sh failure
168
* https://tracker.ceph.com/issues/63700
169
    qa: test_cd_with_args failure
170
* https://tracker.ceph.com/issues/59684
171
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
172
* https://tracker.ceph.com/issues/61243
173
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
174
175 225 Venky Shankar
h3. 5th March 2024
176
177
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
178
179
* https://tracker.ceph.com/issues/57676
180
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
181
* https://tracker.ceph.com/issues/64502
182
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
183
* https://tracker.ceph.com/issues/63949
184
    leak in mds.c detected by valgrind during CephFS QA run
185
* https://tracker.ceph.com/issues/57656
186
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
187
* https://tracker.ceph.com/issues/63699
188
    qa: failed cephfs-shell test_reading_conf
189
* https://tracker.ceph.com/issues/64572
190
    workunits/fsx.sh failure
191
* https://tracker.ceph.com/issues/64707 (new issue)
192
    suites/fsstress.sh hangs on one client - test times out
193
* https://tracker.ceph.com/issues/59684
194
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
195
* https://tracker.ceph.com/issues/63700
196
    qa: test_cd_with_args failure
197
* https://tracker.ceph.com/issues/64711
198
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
199
* https://tracker.ceph.com/issues/64729 (new issue)
200
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
201
* https://tracker.ceph.com/issues/64730
202
    fs/misc/multiple_rsync.sh workunit times out
203
204 224 Venky Shankar
h3. 26th Feb 2024
205
206
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
207
208
(This run is a bit messy due to
209
210
  a) OCI runtime issues in the testing kernel with centos9
211
  b) SELinux denial-related failures
212
  c) Unrelated MON_DOWN warnings)
213
214
* https://tracker.ceph.com/issues/57676
215
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
216
* https://tracker.ceph.com/issues/63700
217
    qa: test_cd_with_args failure
218
* https://tracker.ceph.com/issues/63949
219
    leak in mds.c detected by valgrind during CephFS QA run
220
* https://tracker.ceph.com/issues/59684
221
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
222
* https://tracker.ceph.com/issues/61243
223
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
224
* https://tracker.ceph.com/issues/63699
225
    qa: failed cephfs-shell test_reading_conf
226
* https://tracker.ceph.com/issues/64172
227
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
228
* https://tracker.ceph.com/issues/57656
229
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
230
* https://tracker.ceph.com/issues/64572
231
    workunits/fsx.sh failure
232
233 222 Patrick Donnelly
h3. 20th Feb 2024
234
235
https://github.com/ceph/ceph/pull/55601
236
https://github.com/ceph/ceph/pull/55659
237
238
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
239
240
* https://tracker.ceph.com/issues/64502
241
    client: quincy ceph-fuse fails to unmount after upgrade to main
242
243 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is #64502: the ceph-fuse client is not unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
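
For reference, the #64502 symptom can be spot-checked by hand with a small poll of <code>/proc/mounts</code> after calling <code>fusermount -u</code>. This is only a rough sketch, not the teuthology check itself; the mountpoint path is an assumption, and the 300-second/51-try budget is copied from the <code>MaxWhileTries</code> error quoted in the 2024-03-25 entry above.

<pre><code class="python">
import subprocess
import time

MOUNTPOINT = "/mnt/cephfs"   # assumption: adjust to the mountpoint the job uses

def is_mounted(path):
    """Return True if `path` is still listed as a mount target in /proc/mounts."""
    with open("/proc/mounts") as mounts:
        return any(line.split()[1] == path for line in mounts)

# Ask the kernel to detach the FUSE mount; with the #64502 symptom this returns
# but the ceph-fuse client never actually goes away until daemons are stopped.
subprocess.run(["fusermount", "-u", MOUNTPOINT], check=False)

deadline = time.time() + 300          # same 300-second budget as the failing test
while is_mounted(MOUNTPOINT):
    if time.time() > deadline:
        raise TimeoutError(f"{MOUNTPOINT} still mounted after 300s (#64502 symptom)")
    time.sleep(6)                     # roughly 51 polls over 300 seconds
print("unmounted cleanly")
</code></pre>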
244 218 Venky Shankar
245
h3. 19th Feb 2024
246
247 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
248
249 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
250
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
251
* https://tracker.ceph.com/issues/63700
252
    qa: test_cd_with_args failure
253
* https://tracker.ceph.com/issues/63141
254
    qa/cephfs: test_idem_unaffected_root_squash fails
255
* https://tracker.ceph.com/issues/59684
256
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
257
* https://tracker.ceph.com/issues/63949
258
    leak in mds.c detected by valgrind during CephFS QA run
259
* https://tracker.ceph.com/issues/63764
260
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
261
* https://tracker.ceph.com/issues/63699
262
    qa: failed cephfs-shell test_reading_conf
263 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
264
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
265 201 Rishabh Dave
266 217 Venky Shankar
h3. 29 Jan 2024
267
268
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
269
270
* https://tracker.ceph.com/issues/57676
271
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
272
* https://tracker.ceph.com/issues/63949
273
    leak in mds.c detected by valgrind during CephFS QA run
274
* https://tracker.ceph.com/issues/62067
275
    ffsb.sh failure "Resource temporarily unavailable"
276
* https://tracker.ceph.com/issues/64172
277
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
278
* https://tracker.ceph.com/issues/63265
279
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
280
* https://tracker.ceph.com/issues/61243
281
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
282
* https://tracker.ceph.com/issues/59684
283
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
284
* https://tracker.ceph.com/issues/57656
285
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
286
* https://tracker.ceph.com/issues/64209
287
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
288
289 216 Venky Shankar
h3. 17th Jan 2024
290
291
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
292
293
* https://tracker.ceph.com/issues/63764
294
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
295
* https://tracker.ceph.com/issues/57676
296
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
297
* https://tracker.ceph.com/issues/51964
298
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
299
* https://tracker.ceph.com/issues/63949
300
    leak in mds.c detected by valgrind during CephFS QA run
301
* https://tracker.ceph.com/issues/62067
302
    ffsb.sh failure "Resource temporarily unavailable"
303
* https://tracker.ceph.com/issues/61243
304
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
305
* https://tracker.ceph.com/issues/63259
306
    mds: failed to store backtrace and force file system read-only
307
* https://tracker.ceph.com/issues/63265
308
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
309
310
h3. 16 Jan 2024
311 215 Rishabh Dave
312 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
313
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
314
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
315
316
* https://tracker.ceph.com/issues/63764
317
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
318
* https://tracker.ceph.com/issues/63141
319
  qa/cephfs: test_idem_unaffected_root_squash fails
320
* https://tracker.ceph.com/issues/62067
321
  ffsb.sh failure "Resource temporarily unavailable" 
322
* https://tracker.ceph.com/issues/51964
323
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
324
* https://tracker.ceph.com/issues/54462 
325
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
326
* https://tracker.ceph.com/issues/57676
327
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
328
329
* https://tracker.ceph.com/issues/63949
330
  valgrind leak in MDS
331
* https://tracker.ceph.com/issues/64041
332
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
333
* fsstress failure in last run was due to a kernel MM layer failure, unrelated to CephFS
334
* from last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS
335
336 213 Venky Shankar
h3. 06 Dec 2023
337
338
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
339
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
340
341
* https://tracker.ceph.com/issues/63764
342
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
343
* https://tracker.ceph.com/issues/63233
344
    mon|client|mds: valgrind reports possible leaks in the MDS
345
* https://tracker.ceph.com/issues/57676
346
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
347
* https://tracker.ceph.com/issues/62580
348
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
349
* https://tracker.ceph.com/issues/62067
350
    ffsb.sh failure "Resource temporarily unavailable"
351
* https://tracker.ceph.com/issues/61243
352
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
353
* https://tracker.ceph.com/issues/62081
354
    tasks/fscrypt-common does not finish, times out
355
* https://tracker.ceph.com/issues/63265
356
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
357
* https://tracker.ceph.com/issues/63806
358
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
359
360 211 Patrick Donnelly
h3. 30 Nov 2023
361
362
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
363
364
* https://tracker.ceph.com/issues/63699
365 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
366
* https://tracker.ceph.com/issues/63700
367
    qa: test_cd_with_args failure
368 211 Patrick Donnelly
369 210 Venky Shankar
h3. 29 Nov 2023
370
371
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
372
373
* https://tracker.ceph.com/issues/63233
374
    mon|client|mds: valgrind reports possible leaks in the MDS
375
* https://tracker.ceph.com/issues/63141
376
    qa/cephfs: test_idem_unaffected_root_squash fails
377
* https://tracker.ceph.com/issues/57676
378
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
379
* https://tracker.ceph.com/issues/57655
380
    qa: fs:mixed-clients kernel_untar_build failure
381
* https://tracker.ceph.com/issues/62067
382
    ffsb.sh failure "Resource temporarily unavailable"
383
* https://tracker.ceph.com/issues/61243
384
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
385
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
386
* https://tracker.ceph.com/issues/62810
387
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
388
389 206 Venky Shankar
h3. 14 Nov 2023
390 207 Milind Changire
(Milind)
391
392
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
393
394
* https://tracker.ceph.com/issues/53859
395
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
396
* https://tracker.ceph.com/issues/63233
397
  mon|client|mds: valgrind reports possible leaks in the MDS
398
* https://tracker.ceph.com/issues/63521
399
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
400
* https://tracker.ceph.com/issues/57655
401
  qa: fs:mixed-clients kernel_untar_build failure
402
* https://tracker.ceph.com/issues/62580
403
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
404
* https://tracker.ceph.com/issues/57676
405
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
406
* https://tracker.ceph.com/issues/61243
407
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
408
* https://tracker.ceph.com/issues/63141
409
    qa/cephfs: test_idem_unaffected_root_squash fails
410
* https://tracker.ceph.com/issues/51964
411
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
412
* https://tracker.ceph.com/issues/63522
413
    No module named 'tasks.ceph_fuse'
414
    No module named 'tasks.kclient'
415
    No module named 'tasks.cephfs.fuse_mount'
416
    No module named 'tasks.ceph'
417
* https://tracker.ceph.com/issues/63523
418
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
419
420
421
h3. 14 Nov 2023
422 206 Venky Shankar
423
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
424
425
(ignore the fs:upgrade test failure - the PR is excluded from merge)
426
427
* https://tracker.ceph.com/issues/57676
428
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
429
* https://tracker.ceph.com/issues/63233
430
    mon|client|mds: valgrind reports possible leaks in the MDS
431
* https://tracker.ceph.com/issues/63141
432
    qa/cephfs: test_idem_unaffected_root_squash fails
433
* https://tracker.ceph.com/issues/62580
434
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
435
* https://tracker.ceph.com/issues/57655
436
    qa: fs:mixed-clients kernel_untar_build failure
437
* https://tracker.ceph.com/issues/51964
438
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
439
* https://tracker.ceph.com/issues/63519
440
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
441
* https://tracker.ceph.com/issues/57087
442
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
443
* https://tracker.ceph.com/issues/58945
444
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
445
446 204 Rishabh Dave
h3. 7 Nov 2023
447
448 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
449
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
450
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
451 204 Rishabh Dave
452
* https://tracker.ceph.com/issues/53859
453
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
454
* https://tracker.ceph.com/issues/63233
455
  mon|client|mds: valgrind reports possible leaks in the MDS
456
* https://tracker.ceph.com/issues/57655
457
  qa: fs:mixed-clients kernel_untar_build failure
458
* https://tracker.ceph.com/issues/57676
459
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
460
461
* https://tracker.ceph.com/issues/63473
462
  fsstress.sh failed with errno 124
463
464 202 Rishabh Dave
h3. 3 Nov 2023
465 203 Rishabh Dave
466 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
467
468
* https://tracker.ceph.com/issues/63141
469
  qa/cephfs: test_idem_unaffected_root_squash fails
470
* https://tracker.ceph.com/issues/63233
471
  mon|client|mds: valgrind reports possible leaks in the MDS
472
* https://tracker.ceph.com/issues/57656
473
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
474
* https://tracker.ceph.com/issues/57655
475
  qa: fs:mixed-clients kernel_untar_build failure
476
* https://tracker.ceph.com/issues/57676
477
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
478
479
* https://tracker.ceph.com/issues/59531
480
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
481
* https://tracker.ceph.com/issues/52624
482
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
483
484 198 Patrick Donnelly
h3. 24 October 2023
485
486
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
487
488 200 Patrick Donnelly
Two failures:
489
490
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
491
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
492
493
Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.
494
495 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
496
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
497
* https://tracker.ceph.com/issues/57676
498 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
499
* https://tracker.ceph.com/issues/63233
500
    mon|client|mds: valgrind reports possible leaks in the MDS
501
* https://tracker.ceph.com/issues/59531
502
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
503
* https://tracker.ceph.com/issues/57655
504
    qa: fs:mixed-clients kernel_untar_build failure
505 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
506
    ffsb.sh failure "Resource temporarily unavailable"
507
* https://tracker.ceph.com/issues/63411
508
    qa: flush journal may cause timeouts of `scrub status`
509
* https://tracker.ceph.com/issues/61243
510
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
511
* https://tracker.ceph.com/issues/63141
512 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
513 148 Rishabh Dave
514 195 Venky Shankar
h3. 18 Oct 2023
515
516
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
517
518
* https://tracker.ceph.com/issues/52624
519
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
520
* https://tracker.ceph.com/issues/57676
521
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
522
* https://tracker.ceph.com/issues/63233
523
    mon|client|mds: valgrind reports possible leaks in the MDS
524
* https://tracker.ceph.com/issues/63141
525
    qa/cephfs: test_idem_unaffected_root_squash fails
526
* https://tracker.ceph.com/issues/59531
527
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
528
* https://tracker.ceph.com/issues/62658
529
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
530
* https://tracker.ceph.com/issues/62580
531
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
532
* https://tracker.ceph.com/issues/62067
533
    ffsb.sh failure "Resource temporarily unavailable"
534
* https://tracker.ceph.com/issues/57655
535
    qa: fs:mixed-clients kernel_untar_build failure
536
* https://tracker.ceph.com/issues/62036
537
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
538
* https://tracker.ceph.com/issues/58945
539
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
540
* https://tracker.ceph.com/issues/62847
541
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
542
543 193 Venky Shankar
h3. 13 Oct 2023
544
545
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
546
547
* https://tracker.ceph.com/issues/52624
548
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
549
* https://tracker.ceph.com/issues/62936
550
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
551
* https://tracker.ceph.com/issues/47292
552
    cephfs-shell: test_df_for_valid_file failure
553
* https://tracker.ceph.com/issues/63141
554
    qa/cephfs: test_idem_unaffected_root_squash fails
555
* https://tracker.ceph.com/issues/62081
556
    tasks/fscrypt-common does not finish, times out
557 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
558
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
559 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
560
    mon|client|mds: valgrind reports possible leaks in the MDS
561 193 Venky Shankar
562 190 Patrick Donnelly
h3. 16 Oct 2023
563
564
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
565
566 192 Patrick Donnelly
Infrastructure issues:
567
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
568
    Host lost.
569
570 196 Patrick Donnelly
One followup fix:
571
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
572
573 192 Patrick Donnelly
Failures:
574
575
* https://tracker.ceph.com/issues/56694
576
    qa: avoid blocking forever on hung umount
577
* https://tracker.ceph.com/issues/63089
578
    qa: tasks/mirror times out
579
* https://tracker.ceph.com/issues/52624
580
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
581
* https://tracker.ceph.com/issues/59531
582
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
583
* https://tracker.ceph.com/issues/57676
584
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
585
* https://tracker.ceph.com/issues/62658 
586
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
587
* https://tracker.ceph.com/issues/61243
588
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
589
* https://tracker.ceph.com/issues/57656
590
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
591
* https://tracker.ceph.com/issues/63233
592
  mon|client|mds: valgrind reports possible leaks in the MDS
593 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
594
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
595 192 Patrick Donnelly
596 189 Rishabh Dave
h3. 9 Oct 2023
597
598
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
599
600
* https://tracker.ceph.com/issues/54460
601
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
602
* https://tracker.ceph.com/issues/63141
603
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
604
* https://tracker.ceph.com/issues/62937
605
  logrotate doesn't support parallel execution on same set of logfiles
606
* https://tracker.ceph.com/issues/61400
607
  valgrind+ceph-mon issues
608
* https://tracker.ceph.com/issues/57676
609
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
610
* https://tracker.ceph.com/issues/55805
611
  error during scrub thrashing reached max tries in 900 secs
612
613 188 Venky Shankar
h3. 26 Sep 2023
614
615
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
616
617
* https://tracker.ceph.com/issues/52624
618
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
619
* https://tracker.ceph.com/issues/62873
620
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
621
* https://tracker.ceph.com/issues/61400
622
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
623
* https://tracker.ceph.com/issues/57676
624
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
625
* https://tracker.ceph.com/issues/62682
626
    mon: no mdsmap broadcast after "fs set joinable" is set to true
627
* https://tracker.ceph.com/issues/63089
628
    qa: tasks/mirror times out
629
630 185 Rishabh Dave
h3. 22 Sep 2023
631
632
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
633
634
* https://tracker.ceph.com/issues/59348
635
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
636
* https://tracker.ceph.com/issues/59344
637
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
638
* https://tracker.ceph.com/issues/59531
639
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
640
* https://tracker.ceph.com/issues/61574
641
  build failure for mdtest project
642
* https://tracker.ceph.com/issues/62702
643
  fsstress.sh: MDS slow requests for the internal 'rename' requests
644
* https://tracker.ceph.com/issues/57676
645
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
646
647
* https://tracker.ceph.com/issues/62863 
648
  deadlock in ceph-fuse causes teuthology job to hang and fail
649
* https://tracker.ceph.com/issues/62870
650
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
651
* https://tracker.ceph.com/issues/62873
652
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
653
654 186 Venky Shankar
h3. 20 Sep 2023
655
656
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
657
658
* https://tracker.ceph.com/issues/52624
659
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
660
* https://tracker.ceph.com/issues/61400
661
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
662
* https://tracker.ceph.com/issues/61399
663
    libmpich: undefined references to fi_strerror
664
* https://tracker.ceph.com/issues/62081
665
    tasks/fscrypt-common does not finish, timesout
666
* https://tracker.ceph.com/issues/62658 
667
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
668
* https://tracker.ceph.com/issues/62915
669
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
670
* https://tracker.ceph.com/issues/59531
671
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
672
* https://tracker.ceph.com/issues/62873
673
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
674
* https://tracker.ceph.com/issues/62936
675
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
676
* https://tracker.ceph.com/issues/62937
677
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
678
* https://tracker.ceph.com/issues/62510
679
    snaptest-git-ceph.sh failure with fs/thrash
682
* https://tracker.ceph.com/issues/62126
683
    test failure: suites/blogbench.sh stops running
684 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
685
    mon: no mdsmap broadcast after "fs set joinable" is set to true
686 186 Venky Shankar
687 184 Milind Changire
h3. 19 Sep 2023
688
689
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
690
691
* https://tracker.ceph.com/issues/58220#note-9
692
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
693
* https://tracker.ceph.com/issues/62702
694
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
695
* https://tracker.ceph.com/issues/57676
696
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
697
* https://tracker.ceph.com/issues/59348
698
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
699
* https://tracker.ceph.com/issues/52624
700
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
701
* https://tracker.ceph.com/issues/51964
702
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
703
* https://tracker.ceph.com/issues/61243
704
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
705
* https://tracker.ceph.com/issues/59344
706
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
707
* https://tracker.ceph.com/issues/62873
708
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
709
* https://tracker.ceph.com/issues/59413
710
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
711
* https://tracker.ceph.com/issues/53859
712
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
713
* https://tracker.ceph.com/issues/62482
714
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
715
716 178 Patrick Donnelly
717 177 Venky Shankar
h3. 13 Sep 2023
718
719
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
720
721
* https://tracker.ceph.com/issues/52624
722
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
723
* https://tracker.ceph.com/issues/57655
724
    qa: fs:mixed-clients kernel_untar_build failure
725
* https://tracker.ceph.com/issues/57676
726
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
727
* https://tracker.ceph.com/issues/61243
728
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
729
* https://tracker.ceph.com/issues/62567
730
    postgres workunit times out - MDS_SLOW_REQUEST in logs
731
* https://tracker.ceph.com/issues/61400
732
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
733
* https://tracker.ceph.com/issues/61399
734
    libmpich: undefined references to fi_strerror
739
* https://tracker.ceph.com/issues/51964
740
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
741
* https://tracker.ceph.com/issues/62081
742
    tasks/fscrypt-common does not finish, times out
743 178 Patrick Donnelly
744 179 Patrick Donnelly
h3. 2023 Sep 12
745 178 Patrick Donnelly
746
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
747 1 Patrick Donnelly
748 181 Patrick Donnelly
A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
749
750 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
751 181 Patrick Donnelly
752
Failures:
753
754 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
755
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
756
* https://tracker.ceph.com/issues/57656
757
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
758
* https://tracker.ceph.com/issues/55805
759
  error scrub thrashing reached max tries in 900 secs
760
* https://tracker.ceph.com/issues/62067
761
    ffsb.sh failure "Resource temporarily unavailable"
762
* https://tracker.ceph.com/issues/59344
763
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
764
* https://tracker.ceph.com/issues/61399
765 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
766
* https://tracker.ceph.com/issues/62832
767
  common: config_proxy deadlock during shutdown (and possibly other times)
768
* https://tracker.ceph.com/issues/59413
769 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
770 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
771
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
772
* https://tracker.ceph.com/issues/62567
773
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
774
* https://tracker.ceph.com/issues/54460
775
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
776
* https://tracker.ceph.com/issues/58220#note-9
777
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
778
* https://tracker.ceph.com/issues/59348
779
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
780 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
781
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
782
* https://tracker.ceph.com/issues/62848
783
    qa: fail_fs upgrade scenario hanging
784
* https://tracker.ceph.com/issues/62081
785
    tasks/fscrypt-common does not finish, times out
786 177 Venky Shankar
787 176 Venky Shankar
h3. 11 Sep 2023
788 175 Venky Shankar
789
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
790
791
* https://tracker.ceph.com/issues/52624
792
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
793
* https://tracker.ceph.com/issues/61399
794
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
795
* https://tracker.ceph.com/issues/57655
796
    qa: fs:mixed-clients kernel_untar_build failure
797
* https://tracker.ceph.com/issues/61399
798
    ior build failure
799
* https://tracker.ceph.com/issues/59531
800
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
801
* https://tracker.ceph.com/issues/59344
802
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
803
* https://tracker.ceph.com/issues/59346
804
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
805
* https://tracker.ceph.com/issues/59348
806
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
807
* https://tracker.ceph.com/issues/57676
808
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
809
* https://tracker.ceph.com/issues/61243
810
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
811
* https://tracker.ceph.com/issues/62567
812
  postgres workunit times out - MDS_SLOW_REQUEST in logs
813
814
815 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
816
817
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
818
819
* https://tracker.ceph.com/issues/51964
820
  test_cephfs_mirror_restart_sync_on_blocklist failure
821
* https://tracker.ceph.com/issues/59348
822
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
823
* https://tracker.ceph.com/issues/53859
824
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
825
* https://tracker.ceph.com/issues/61892
826
  test_strays.TestStrays.test_snapshot_remove failed
827
* https://tracker.ceph.com/issues/54460
828
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
829
* https://tracker.ceph.com/issues/59346
830
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
831
* https://tracker.ceph.com/issues/59344
832
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
833
* https://tracker.ceph.com/issues/62484
834
  qa: ffsb.sh test failure
835
* https://tracker.ceph.com/issues/62567
836
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
837
  
838
* https://tracker.ceph.com/issues/61399
839
  ior build failure
840
* https://tracker.ceph.com/issues/57676
841
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
842
* https://tracker.ceph.com/issues/55805
843
  error scrub thrashing reached max tries in 900 secs
844
845 172 Rishabh Dave
h3. 6 Sep 2023
846 171 Rishabh Dave
847 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
848 171 Rishabh Dave
849 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
850
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
851 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
852
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
853 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
854 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
855
* https://tracker.ceph.com/issues/59348
856
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
857
* https://tracker.ceph.com/issues/54462
858
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
859
* https://tracker.ceph.com/issues/62556
860
  test_acls: xfstests_dev: python2 is missing
861
* https://tracker.ceph.com/issues/62067
862
  ffsb.sh failure "Resource temporarily unavailable"
863
* https://tracker.ceph.com/issues/57656
864
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
865 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
866
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
867 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
868 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
869
870 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
871
  ior build failure
872
* https://tracker.ceph.com/issues/57676
873
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
874
* https://tracker.ceph.com/issues/55805
875
  error scrub thrashing reached max tries in 900 secs
876 173 Rishabh Dave
877
* https://tracker.ceph.com/issues/62567
878
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
879
* https://tracker.ceph.com/issues/62702
880
  workunit test suites/fsstress.sh on smithi066 with status 124
881 170 Rishabh Dave
882
h3. 5 Sep 2023
883
884
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
885
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
886
  this run has failures but according to Adam King these are not relevant and should be ignored
887
888
* https://tracker.ceph.com/issues/61892
889
  test_snapshot_remove (test_strays.TestStrays) failed
890
* https://tracker.ceph.com/issues/59348
891
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
892
* https://tracker.ceph.com/issues/54462
893
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
894
* https://tracker.ceph.com/issues/62067
895
  ffsb.sh failure "Resource temporarily unavailable"
896
* https://tracker.ceph.com/issues/57656 
897
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
898
* https://tracker.ceph.com/issues/59346
899
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
900
* https://tracker.ceph.com/issues/59344
901
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
902
* https://tracker.ceph.com/issues/50223
903
  client.xxxx isn't responding to mclientcaps(revoke)
904
* https://tracker.ceph.com/issues/57655
905
  qa: fs:mixed-clients kernel_untar_build failure
906
* https://tracker.ceph.com/issues/62187
907
  iozone.sh: line 5: iozone: command not found
908
 
909
* https://tracker.ceph.com/issues/61399
910
  ior build failure
911
* https://tracker.ceph.com/issues/57676
912
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
913
* https://tracker.ceph.com/issues/55805
914
  error scrub thrashing reached max tries in 900 secs
915 169 Venky Shankar
916
917
h3. 31 Aug 2023
918
919
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
920
921
* https://tracker.ceph.com/issues/52624
922
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
923
* https://tracker.ceph.com/issues/62187
924
    iozone: command not found
925
* https://tracker.ceph.com/issues/61399
926
    ior build failure
927
* https://tracker.ceph.com/issues/59531
928
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
929
* https://tracker.ceph.com/issues/61399
930
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
931
* https://tracker.ceph.com/issues/57655
932
    qa: fs:mixed-clients kernel_untar_build failure
933
* https://tracker.ceph.com/issues/59344
934
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
935
* https://tracker.ceph.com/issues/59346
936
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
937
* https://tracker.ceph.com/issues/59348
938
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
939
* https://tracker.ceph.com/issues/59413
940
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
941
* https://tracker.ceph.com/issues/62653
942
    qa: unimplemented fcntl command: 1036 with fsstress
943
* https://tracker.ceph.com/issues/61400
944
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
945
* https://tracker.ceph.com/issues/62658
946
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
947
* https://tracker.ceph.com/issues/62188
948
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
949 168 Venky Shankar
950
951
h3. 25 Aug 2023
952
953
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
954
955
* https://tracker.ceph.com/issues/59344
956
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
957
* https://tracker.ceph.com/issues/59346
958
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
959
* https://tracker.ceph.com/issues/59348
960
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
961
* https://tracker.ceph.com/issues/57655
962
    qa: fs:mixed-clients kernel_untar_build failure
963
* https://tracker.ceph.com/issues/61243
964
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
965
* https://tracker.ceph.com/issues/61399
966
    ior build failure
967
* https://tracker.ceph.com/issues/61399
968
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
969
* https://tracker.ceph.com/issues/62484
970
    qa: ffsb.sh test failure
971
* https://tracker.ceph.com/issues/59531
972
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
973
* https://tracker.ceph.com/issues/62510
974
    snaptest-git-ceph.sh failure with fs/thrash
975 167 Venky Shankar
976
977
h3. 24 Aug 2023
978
979
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
980
981
* https://tracker.ceph.com/issues/57676
982
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
983
* https://tracker.ceph.com/issues/51964
984
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
985
* https://tracker.ceph.com/issues/59344
986
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
987
* https://tracker.ceph.com/issues/59346
988
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
989
* https://tracker.ceph.com/issues/59348
990
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
991
* https://tracker.ceph.com/issues/61399
992
    ior build failure
993
* https://tracker.ceph.com/issues/61399
994
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
995
* https://tracker.ceph.com/issues/62510
996
    snaptest-git-ceph.sh failure with fs/thrash
997
* https://tracker.ceph.com/issues/62484
998
    qa: ffsb.sh test failure
999
* https://tracker.ceph.com/issues/57087
1000
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1001
* https://tracker.ceph.com/issues/57656
1002
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1003
* https://tracker.ceph.com/issues/62187
1004
    iozone: command not found
1005
* https://tracker.ceph.com/issues/62188
1006
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1007
* https://tracker.ceph.com/issues/62567
1008
    postgres workunit times out - MDS_SLOW_REQUEST in logs
1009 166 Venky Shankar
1010
1011
h3. 22 Aug 2023
1012
1013
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
1014
1015
* https://tracker.ceph.com/issues/57676
1016
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1017
* https://tracker.ceph.com/issues/51964
1018
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1019
* https://tracker.ceph.com/issues/59344
1020
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1021
* https://tracker.ceph.com/issues/59346
1022
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1023
* https://tracker.ceph.com/issues/59348
1024
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1025
* https://tracker.ceph.com/issues/61399
1026
    ior build failure
1027
* https://tracker.ceph.com/issues/61399
1028
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1029
* https://tracker.ceph.com/issues/57655
1030
    qa: fs:mixed-clients kernel_untar_build failure
1031
* https://tracker.ceph.com/issues/61243
1032
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1033
* https://tracker.ceph.com/issues/62188
1034
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1035
* https://tracker.ceph.com/issues/62510
1036
    snaptest-git-ceph.sh failure with fs/thrash
1037
* https://tracker.ceph.com/issues/62511
1038
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
1039 165 Venky Shankar
1040
1041
h3. 14 Aug 2023
1042
1043
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1044
1045
* https://tracker.ceph.com/issues/51964
1046
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1047
* https://tracker.ceph.com/issues/61400
1048
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1049
* https://tracker.ceph.com/issues/61399
1050
    ior build failure
1051
* https://tracker.ceph.com/issues/59348
1052
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1053
* https://tracker.ceph.com/issues/59531
1054
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1055
* https://tracker.ceph.com/issues/59344
1056
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1057
* https://tracker.ceph.com/issues/59346
1058
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1059
* https://tracker.ceph.com/issues/61399
1060
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1061
* https://tracker.ceph.com/issues/59684 [kclient bug]
1062
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1063
* https://tracker.ceph.com/issues/61243 (NEW)
1064
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1065
* https://tracker.ceph.com/issues/57655
1066
    qa: fs:mixed-clients kernel_untar_build failure
1067
* https://tracker.ceph.com/issues/57656
1068
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1069 163 Venky Shankar
1070
1071
h3. 28 JULY 2023
1072
1073
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1074
1075
* https://tracker.ceph.com/issues/51964
1076
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1077
* https://tracker.ceph.com/issues/61400
1078
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1079
* https://tracker.ceph.com/issues/61399
1080
    ior build failure
1081
* https://tracker.ceph.com/issues/57676
1082
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1083
* https://tracker.ceph.com/issues/59348
1084
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1085
* https://tracker.ceph.com/issues/59531
1086
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1087
* https://tracker.ceph.com/issues/59344
1088
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1089
* https://tracker.ceph.com/issues/59346
1090
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1091
* https://github.com/ceph/ceph/pull/52556
1092
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1093
* https://tracker.ceph.com/issues/62187
1094
    iozone: command not found
1095
* https://tracker.ceph.com/issues/61399
1096
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1097
* https://tracker.ceph.com/issues/62188
1098 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1099 158 Rishabh Dave
1100
h3. 24 Jul 2023
1101
1102
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1103
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1104
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1105
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1106
One more run to check whether blogbench.sh fails every time:
1107
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1108
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing (see the tally sketch below) -
1109 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
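
As an aid for this kind of triage (deciding whether a failure is tied to a PR in the batch or is already present on main), here is a small self-contained sketch that counts how often each tracker issue recurs across run notes pasted from this page; it assumes only the bullet format used here.

<pre><code class="python">
import re
from collections import Counter

# Paste the run notes (the bullet lists on this page) into `notes`.
notes = """
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
"""

# Tracker references look like https://tracker.ceph.com/issues/<id>.
issue_ids = re.findall(r"tracker\.ceph\.com/issues/(\d+)", notes)

# Most frequently recurring issues first.
for issue, count in Counter(issue_ids).most_common():
    print(f"https://tracker.ceph.com/issues/{issue}: seen {count} time(s)")
</code></pre>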
1110
1111
* https://tracker.ceph.com/issues/61892
1112
  test_snapshot_remove (test_strays.TestStrays) failed
1113
* https://tracker.ceph.com/issues/53859
1114
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1115
* https://tracker.ceph.com/issues/61982
1116
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1117
* https://tracker.ceph.com/issues/52438
1118
  qa: ffsb timeout
1119
* https://tracker.ceph.com/issues/54460
1120
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1121
* https://tracker.ceph.com/issues/57655
1122
  qa: fs:mixed-clients kernel_untar_build failure
1123
* https://tracker.ceph.com/issues/48773
1124
  reached max tries: scrub does not complete
1125
* https://tracker.ceph.com/issues/58340
1126
  mds: fsstress.sh hangs with multimds
1127
* https://tracker.ceph.com/issues/61400
1128
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1129
* https://tracker.ceph.com/issues/57206
1130
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1131
  
1132
* https://tracker.ceph.com/issues/57656
1133
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1134
* https://tracker.ceph.com/issues/61399
1135
  ior build failure
1136
* https://tracker.ceph.com/issues/57676
1137
  error during scrub thrashing: backtrace
1138
  
1139
* https://tracker.ceph.com/issues/38452
1140
  'sudo -u postgres -- pgbench -s 500 -i' failed
1141 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1142 157 Venky Shankar
  blogbench.sh failure
1143
1144
h3. 18 July 2023
1145
1146
* https://tracker.ceph.com/issues/52624
1147
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1148
* https://tracker.ceph.com/issues/57676
1149
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1150
* https://tracker.ceph.com/issues/54460
1151
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1152
* https://tracker.ceph.com/issues/57655
1153
    qa: fs:mixed-clients kernel_untar_build failure
1154
* https://tracker.ceph.com/issues/51964
1155
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1156
* https://tracker.ceph.com/issues/59344
1157
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1158
* https://tracker.ceph.com/issues/61182
1159
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1160
* https://tracker.ceph.com/issues/61957
1161
    test_client_limits.TestClientLimits.test_client_release_bug
1162
* https://tracker.ceph.com/issues/59348
1163
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1164
* https://tracker.ceph.com/issues/61892
1165
    test_strays.TestStrays.test_snapshot_remove failed
1166
* https://tracker.ceph.com/issues/59346
1167
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1168
* https://tracker.ceph.com/issues/44565
1169
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1170
* https://tracker.ceph.com/issues/62067
1171
    ffsb.sh failure "Resource temporarily unavailable"
1172 156 Venky Shankar
1173
1174
h3. 17 July 2023
1175
1176
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1177
1178
* https://tracker.ceph.com/issues/61982
1179
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1180
* https://tracker.ceph.com/issues/59344
1181
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1182
* https://tracker.ceph.com/issues/61182
1183
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1184
* https://tracker.ceph.com/issues/61957
1185
    test_client_limits.TestClientLimits.test_client_release_bug
1186
* https://tracker.ceph.com/issues/61400
1187
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1188
* https://tracker.ceph.com/issues/59348
1189
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1190
* https://tracker.ceph.com/issues/61892
1191
    test_strays.TestStrays.test_snapshot_remove failed
1192
* https://tracker.ceph.com/issues/59346
1193
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1194
* https://tracker.ceph.com/issues/62036
1195
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1196
* https://tracker.ceph.com/issues/61737
1197
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1198
* https://tracker.ceph.com/issues/44565
1199
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1200 155 Rishabh Dave
1201 1 Patrick Donnelly
1202 153 Rishabh Dave
h3. 13 July 2023 Run 2
1203 152 Rishabh Dave
1204
1205
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1206
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1207
1208
* https://tracker.ceph.com/issues/61957
1209
  test_client_limits.TestClientLimits.test_client_release_bug
1210
* https://tracker.ceph.com/issues/61982
1211
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1212
* https://tracker.ceph.com/issues/59348
1213
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1214
* https://tracker.ceph.com/issues/59344
1215
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1216
* https://tracker.ceph.com/issues/54460
1217
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1218
* https://tracker.ceph.com/issues/57655
1219
  qa: fs:mixed-clients kernel_untar_build failure
1220
* https://tracker.ceph.com/issues/61400
1221
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1222
* https://tracker.ceph.com/issues/61399
1223
  ior build failure
1224
1225 151 Venky Shankar
h3. 13 July 2023
1226
1227
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1228
1229
* https://tracker.ceph.com/issues/54460
1230
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1231
* https://tracker.ceph.com/issues/61400
1232
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1233
* https://tracker.ceph.com/issues/57655
1234
    qa: fs:mixed-clients kernel_untar_build failure
1235
* https://tracker.ceph.com/issues/61945
1236
    LibCephFS.DelegTimeout failure
1237
* https://tracker.ceph.com/issues/52624
1238
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1239
* https://tracker.ceph.com/issues/57676
1240
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1241
* https://tracker.ceph.com/issues/59348
1242
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1243
* https://tracker.ceph.com/issues/59344
1244
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1245
* https://tracker.ceph.com/issues/51964
1246
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1247
* https://tracker.ceph.com/issues/59346
1248
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1249
* https://tracker.ceph.com/issues/61982
1250
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1251 150 Rishabh Dave
1252
1253
h3. 13 Jul 2023
1254
1255
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1256
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1257
1258
* https://tracker.ceph.com/issues/61957
1259
  test_client_limits.TestClientLimits.test_client_release_bug
1260
* https://tracker.ceph.com/issues/59348
1261
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1262
* https://tracker.ceph.com/issues/59346
1263
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1264
* https://tracker.ceph.com/issues/48773
1265
  scrub does not complete: reached max tries
1266
* https://tracker.ceph.com/issues/59344
1267
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1268
* https://tracker.ceph.com/issues/52438
1269
  qa: ffsb timeout
1270
* https://tracker.ceph.com/issues/57656
1271
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1272
* https://tracker.ceph.com/issues/58742
1273
  xfstests-dev: kcephfs: generic
1274
* https://tracker.ceph.com/issues/61399
1275 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1276 149 Rishabh Dave
1277 148 Rishabh Dave
h3. 12 July 2023
1278
1279
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1280
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1281
1282
* https://tracker.ceph.com/issues/61892
1283
  test_strays.TestStrays.test_snapshot_remove failed
1284
* https://tracker.ceph.com/issues/59348
1285
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1286
* https://tracker.ceph.com/issues/53859
1287
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1288
* https://tracker.ceph.com/issues/59346
1289
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1290
* https://tracker.ceph.com/issues/58742
1291
  xfstests-dev: kcephfs: generic
1292
* https://tracker.ceph.com/issues/59344
1293
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1294
* https://tracker.ceph.com/issues/52438
1295
  qa: ffsb timeout
1296
* https://tracker.ceph.com/issues/57656
1297
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1298
* https://tracker.ceph.com/issues/54460
1299
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1300
* https://tracker.ceph.com/issues/57655
1301
  qa: fs:mixed-clients kernel_untar_build failure
1302
* https://tracker.ceph.com/issues/61182
1303
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1304
* https://tracker.ceph.com/issues/61400
1305
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1306 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1307 146 Patrick Donnelly
  reached max tries: scrub does not complete
1308
1309
h3. 05 July 2023
1310
1311
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1312
1313 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1314 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1315
1316
h3. 27 Jun 2023
1317
1318
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1319 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1320
1321
* https://tracker.ceph.com/issues/59348
1322
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1323
* https://tracker.ceph.com/issues/54460
1324
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1325
* https://tracker.ceph.com/issues/59346
1326
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1327
* https://tracker.ceph.com/issues/59344
1328
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1329
* https://tracker.ceph.com/issues/61399
1330
  libmpich: undefined references to fi_strerror
1331
* https://tracker.ceph.com/issues/50223
1332
  client.xxxx isn't responding to mclientcaps(revoke)
1333 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1334
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1335 142 Venky Shankar
1336
1337
h3. 22 June 2023
1338
1339
* https://tracker.ceph.com/issues/57676
1340
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1341
* https://tracker.ceph.com/issues/54460
1342
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1343
* https://tracker.ceph.com/issues/59344
1344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1345
* https://tracker.ceph.com/issues/59348
1346
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1347
* https://tracker.ceph.com/issues/61400
1348
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1349
* https://tracker.ceph.com/issues/57655
1350
    qa: fs:mixed-clients kernel_untar_build failure
1351
* https://tracker.ceph.com/issues/61394
1352
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1353
* https://tracker.ceph.com/issues/61762
1354
    qa: wait_for_clean: failed before timeout expired
1355
* https://tracker.ceph.com/issues/61775
1356
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1357
* https://tracker.ceph.com/issues/44565
1358
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1359
* https://tracker.ceph.com/issues/61790
1360
    cephfs client to mds comms remain silent after reconnect
1361
* https://tracker.ceph.com/issues/61791
1362
    snaptest-git-ceph.sh test timed out (job dead)
1363 139 Venky Shankar
1364
1365
h3. 20 June 2023
1366
1367
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1368
1369
* https://tracker.ceph.com/issues/57676
1370
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1371
* https://tracker.ceph.com/issues/54460
1372
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1373 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1374 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1375 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1376 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1377
* https://tracker.ceph.com/issues/59344
1378
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1379
* https://tracker.ceph.com/issues/59348
1380
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1381
* https://tracker.ceph.com/issues/57656
1382
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1383
* https://tracker.ceph.com/issues/61400
1384
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1385
* https://tracker.ceph.com/issues/57655
1386
    qa: fs:mixed-clients kernel_untar_build failure
1387
* https://tracker.ceph.com/issues/44565
1388
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1389
* https://tracker.ceph.com/issues/61737
1390 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1391
1392
h3. 16 June 2023
1393
1394 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1395 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1396 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1397 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1398
1399
1400
* https://tracker.ceph.com/issues/59344
1401
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1402 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1403
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1404 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1405
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1406
* https://tracker.ceph.com/issues/57656
1407
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1408
* https://tracker.ceph.com/issues/54460
1409
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1410 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1411
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1412 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1413
  libmpich: undefined references to fi_strerror
1414
* https://tracker.ceph.com/issues/58945
1415
  xfstests-dev: ceph-fuse: generic 
1416 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1417 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1418
1419
h3. 24 May 2023
1420
1421
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1422
1423
* https://tracker.ceph.com/issues/57676
1424
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1425
* https://tracker.ceph.com/issues/59683
1426
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1427
* https://tracker.ceph.com/issues/61399
1428
    qa: "[Makefile:299: ior] Error 1"
1429
* https://tracker.ceph.com/issues/61265
1430
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1431
* https://tracker.ceph.com/issues/59348
1432
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1433
* https://tracker.ceph.com/issues/59346
1434
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1435
* https://tracker.ceph.com/issues/61400
1436
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1437
* https://tracker.ceph.com/issues/54460
1438
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1439
* https://tracker.ceph.com/issues/51964
1440
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1441
* https://tracker.ceph.com/issues/59344
1442
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1443
* https://tracker.ceph.com/issues/61407
1444
    mds: abort on CInode::verify_dirfrags
1445
* https://tracker.ceph.com/issues/48773
1446
    qa: scrub does not complete
1447
* https://tracker.ceph.com/issues/57655
1448
    qa: fs:mixed-clients kernel_untar_build failure
1449
* https://tracker.ceph.com/issues/61409
1450 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1451
1452
h3. 15 May 2023
1453 130 Venky Shankar
1454 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1455
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1456
1457
* https://tracker.ceph.com/issues/52624
1458
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1459
* https://tracker.ceph.com/issues/54460
1460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1461
* https://tracker.ceph.com/issues/57676
1462
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1463
* https://tracker.ceph.com/issues/59684 [kclient bug]
1464
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1465
* https://tracker.ceph.com/issues/59348
1466
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1467 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1468
    dbench test results in call trace in dmesg [kclient bug]
1469 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1470 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1471 125 Venky Shankar
1472
 
1473 129 Rishabh Dave
h3. 11 May 2023
1474
1475
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1476
1477
* https://tracker.ceph.com/issues/59684 [kclient bug]
1478
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1479
* https://tracker.ceph.com/issues/59348
1480
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1481
* https://tracker.ceph.com/issues/57655
1482
  qa: fs:mixed-clients kernel_untar_build failure
1483
* https://tracker.ceph.com/issues/57676
1484
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1485
* https://tracker.ceph.com/issues/55805
1486
  error during scrub thrashing reached max tries in 900 secs
1487
* https://tracker.ceph.com/issues/54460
1488
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1489
* https://tracker.ceph.com/issues/57656
1490
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1491
* https://tracker.ceph.com/issues/58220
1492
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1493 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1494
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1495 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1496
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1497 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1498
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1499 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1500
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1501
1502 125 Venky Shankar
h3. 11 May 2023
1503 127 Venky Shankar
1504
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1505 126 Venky Shankar
1506 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1507
 was included in the branch; however, the PR got updated and needs a retest).
1508
1509
* https://tracker.ceph.com/issues/52624
1510
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1511
* https://tracker.ceph.com/issues/54460
1512
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1513
* https://tracker.ceph.com/issues/57676
1514
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1515
* https://tracker.ceph.com/issues/59683
1516
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1517
* https://tracker.ceph.com/issues/59684 [kclient bug]
1518
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1519
* https://tracker.ceph.com/issues/59348
1520 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1521
1522
h3. 09 May 2023
1523
1524
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1525
1526
* https://tracker.ceph.com/issues/52624
1527
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1528
* https://tracker.ceph.com/issues/58340
1529
    mds: fsstress.sh hangs with multimds
1530
* https://tracker.ceph.com/issues/54460
1531
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1532
* https://tracker.ceph.com/issues/57676
1533
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1534
* https://tracker.ceph.com/issues/51964
1535
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1536
* https://tracker.ceph.com/issues/59350
1537
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1538
* https://tracker.ceph.com/issues/59683
1539
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1540
* https://tracker.ceph.com/issues/59684 [kclient bug]
1541
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1542
* https://tracker.ceph.com/issues/59348
1543 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1544
1545
h3. 10 Apr 2023
1546
1547
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1548
1549
* https://tracker.ceph.com/issues/52624
1550
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1551
* https://tracker.ceph.com/issues/58340
1552
    mds: fsstress.sh hangs with multimds
1553
* https://tracker.ceph.com/issues/54460
1554
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1555
* https://tracker.ceph.com/issues/57676
1556
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1557 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1558 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1559 121 Rishabh Dave
1560 120 Rishabh Dave
h3. 31 Mar 2023
1561 122 Rishabh Dave
1562
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1563 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1564
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1565
1566
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1567
1568
* https://tracker.ceph.com/issues/57676
1569
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1570
* https://tracker.ceph.com/issues/54460
1571
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1572
* https://tracker.ceph.com/issues/58220
1573
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1574
* https://tracker.ceph.com/issues/58220#note-9
1575
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1576
* https://tracker.ceph.com/issues/56695
1577
  Command failed (workunit test suites/pjd.sh)
1578
* https://tracker.ceph.com/issues/58564 
1579
  workunit dbench failed with error code 1
1580
* https://tracker.ceph.com/issues/57206
1581
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1582
* https://tracker.ceph.com/issues/57580
1583
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1584
* https://tracker.ceph.com/issues/58940
1585
  ceph osd hit ceph_abort
1586
* https://tracker.ceph.com/issues/55805
1587 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1588
1589
h3. 30 March 2023
1590
1591
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1592
1593
* https://tracker.ceph.com/issues/58938
1594
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1595
* https://tracker.ceph.com/issues/51964
1596
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1597
* https://tracker.ceph.com/issues/58340
1598 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1599
1600 115 Venky Shankar
h3. 29 March 2023
1601 114 Venky Shankar
1602
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1603
1604
* https://tracker.ceph.com/issues/56695
1605
    [RHEL stock] pjd test failures
1606
* https://tracker.ceph.com/issues/57676
1607
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1608
* https://tracker.ceph.com/issues/57087
1609
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1610 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1611
    mds: fsstress.sh hangs with multimds
1612 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1613
    qa: fs:mixed-clients kernel_untar_build failure
1614 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1615
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1616 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1617 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1618
1619
h3. 13 Mar 2023
1620
1621
* https://tracker.ceph.com/issues/56695
1622
    [RHEL stock] pjd test failures
1623
* https://tracker.ceph.com/issues/57676
1624
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1625
* https://tracker.ceph.com/issues/51964
1626
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1627
* https://tracker.ceph.com/issues/54460
1628
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1629
* https://tracker.ceph.com/issues/57656
1630 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1631
1632
h3. 09 Mar 2023
1633
1634
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1635
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1636
1637
* https://tracker.ceph.com/issues/56695
1638
    [RHEL stock] pjd test failures
1639
* https://tracker.ceph.com/issues/57676
1640
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1641
* https://tracker.ceph.com/issues/51964
1642
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1643
* https://tracker.ceph.com/issues/54460
1644
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1645
* https://tracker.ceph.com/issues/58340
1646
    mds: fsstress.sh hangs with multimds
1647
* https://tracker.ceph.com/issues/57087
1648 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1649
1650
h3. 07 Mar 2023
1651
1652
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1653
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1654
1655
* https://tracker.ceph.com/issues/56695
1656
    [RHEL stock] pjd test failures
1657
* https://tracker.ceph.com/issues/57676
1658
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1659
* https://tracker.ceph.com/issues/51964
1660
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1661
* https://tracker.ceph.com/issues/57656
1662
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1663
* https://tracker.ceph.com/issues/57655
1664
    qa: fs:mixed-clients kernel_untar_build failure
1665
* https://tracker.ceph.com/issues/58220
1666
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1667
* https://tracker.ceph.com/issues/54460
1668
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1669
* https://tracker.ceph.com/issues/58934
1670 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1671
1672
h3. 28 Feb 2023
1673
1674
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1675
1676
* https://tracker.ceph.com/issues/56695
1677
    [RHEL stock] pjd test failures
1678
* https://tracker.ceph.com/issues/57676
1679
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1680 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1681 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1682
1683 107 Venky Shankar
(teuthology infra issues are causing testing delays - merging PRs that have passing tests)
1684
1685
h3. 25 Jan 2023
1686
1687
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1688
1689
* https://tracker.ceph.com/issues/52624
1690
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1691
* https://tracker.ceph.com/issues/56695
1692
    [RHEL stock] pjd test failures
1693
* https://tracker.ceph.com/issues/57676
1694
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1695
* https://tracker.ceph.com/issues/56446
1696
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1697
* https://tracker.ceph.com/issues/57206
1698
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1699
* https://tracker.ceph.com/issues/58220
1700
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1701
* https://tracker.ceph.com/issues/58340
1702
  mds: fsstress.sh hangs with multimds
1703
* https://tracker.ceph.com/issues/56011
1704
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1705
* https://tracker.ceph.com/issues/54460
1706 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1707
1708
h3. 30 JAN 2023
1709
1710
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1711
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1712 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1713
1714 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1715
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1716
* https://tracker.ceph.com/issues/56695
1717
  [RHEL stock] pjd test failures
1718
* https://tracker.ceph.com/issues/57676
1719
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1720
* https://tracker.ceph.com/issues/55332
1721
  Failure in snaptest-git-ceph.sh
1722
* https://tracker.ceph.com/issues/51964
1723
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1724
* https://tracker.ceph.com/issues/56446
1725
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1726
* https://tracker.ceph.com/issues/57655 
1727
  qa: fs:mixed-clients kernel_untar_build failure
1728
* https://tracker.ceph.com/issues/54460
1729
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1730 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1731
  mds: fsstress.sh hangs with multimds
1732 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1733 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1734
1735
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1736 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1737
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1738 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1739 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1740
1741
h3. 15 Dec 2022
1742
1743
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1744
1745
* https://tracker.ceph.com/issues/52624
1746
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1747
* https://tracker.ceph.com/issues/56695
1748
    [RHEL stock] pjd test failures
1749
* https://tracker.ceph.com/issues/58219
1750
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1751
* https://tracker.ceph.com/issues/57655
1752
    qa: fs:mixed-clients kernel_untar_build failure
1753
* https://tracker.ceph.com/issues/57676
1754
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1755
* https://tracker.ceph.com/issues/58340
1756 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1757
1758
h3. 08 Dec 2022
1759 99 Venky Shankar
1760 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1761
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1762
1763
(lots of transient git.ceph.com failures)
1764
1765
* https://tracker.ceph.com/issues/52624
1766
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1767
* https://tracker.ceph.com/issues/56695
1768
    [RHEL stock] pjd test failures
1769
* https://tracker.ceph.com/issues/57655
1770
    qa: fs:mixed-clients kernel_untar_build failure
1771
* https://tracker.ceph.com/issues/58219
1772
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1773
* https://tracker.ceph.com/issues/58220
1774
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1775 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1776
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1777 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1778
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1779
* https://tracker.ceph.com/issues/54460
1780
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1781 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1782 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1783
1784
h3. 14 Oct 2022
1785
1786
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1787
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1788
1789
* https://tracker.ceph.com/issues/52624
1790
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1791
* https://tracker.ceph.com/issues/55804
1792
    Command failed (workunit test suites/pjd.sh)
1793
* https://tracker.ceph.com/issues/51964
1794
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1795
* https://tracker.ceph.com/issues/57682
1796
    client: ERROR: test_reconnect_after_blocklisted
1797 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1798 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1799
1800
h3. 10 Oct 2022
1801 92 Rishabh Dave
1802 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1803
1804
Re-runs:
1805
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1806 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1807 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1808 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1809 91 Rishabh Dave
1810
Known bugs:
1811
* https://tracker.ceph.com/issues/52624
1812
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1813
* https://tracker.ceph.com/issues/50223
1814
  client.xxxx isn't responding to mclientcaps(revoke)
1815
* https://tracker.ceph.com/issues/57299
1816
  qa: test_dump_loads fails with JSONDecodeError
1817
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1818
  qa: fs:mixed-clients kernel_untar_build failure
1819
* https://tracker.ceph.com/issues/57206
1820 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1821
1822
h3. 2022 Sep 29
1823
1824
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1825
1826
* https://tracker.ceph.com/issues/55804
1827
  Command failed (workunit test suites/pjd.sh)
1828
* https://tracker.ceph.com/issues/36593
1829
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1830
* https://tracker.ceph.com/issues/52624
1831
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1832
* https://tracker.ceph.com/issues/51964
1833
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1834
* https://tracker.ceph.com/issues/56632
1835
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1836
* https://tracker.ceph.com/issues/50821
1837 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1838
1839
h3. 2022 Sep 26
1840
1841
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1842
1843
* https://tracker.ceph.com/issues/55804
1844
    qa failure: pjd link tests failed
1845
* https://tracker.ceph.com/issues/57676
1846
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1847
* https://tracker.ceph.com/issues/52624
1848
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1849
* https://tracker.ceph.com/issues/57580
1850
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1851
* https://tracker.ceph.com/issues/48773
1852
    qa: scrub does not complete
1853
* https://tracker.ceph.com/issues/57299
1854
    qa: test_dump_loads fails with JSONDecodeError
1855
* https://tracker.ceph.com/issues/57280
1856
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1857
* https://tracker.ceph.com/issues/57205
1858
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1859
* https://tracker.ceph.com/issues/57656
1860
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1861
* https://tracker.ceph.com/issues/57677
1862
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1863
* https://tracker.ceph.com/issues/57206
1864
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1865
* https://tracker.ceph.com/issues/57446
1866
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1867 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1868
    qa: fs:mixed-clients kernel_untar_build failure
1869 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1870
    client: ERROR: test_reconnect_after_blocklisted
1871 87 Patrick Donnelly
1872
1873
h3. 2022 Sep 22
1874
1875
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1876
1877
* https://tracker.ceph.com/issues/57299
1878
    qa: test_dump_loads fails with JSONDecodeError
1879
* https://tracker.ceph.com/issues/57205
1880
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1881
* https://tracker.ceph.com/issues/52624
1882
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1883
* https://tracker.ceph.com/issues/57580
1884
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1885
* https://tracker.ceph.com/issues/57280
1886
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1887
* https://tracker.ceph.com/issues/48773
1888
    qa: scrub does not complete
1889
* https://tracker.ceph.com/issues/56446
1890
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1891
* https://tracker.ceph.com/issues/57206
1892
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1893
* https://tracker.ceph.com/issues/51267
1894
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1895
1896
NEW:
1897
1898
* https://tracker.ceph.com/issues/57656
1899
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1900
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1901
    qa: fs:mixed-clients kernel_untar_build failure
1902
* https://tracker.ceph.com/issues/57657
1903
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1904
1905
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1906 80 Venky Shankar
1907 79 Venky Shankar
1908
h3. 2022 Sep 16
1909
1910
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1911
1912
* https://tracker.ceph.com/issues/57446
1913
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1914
* https://tracker.ceph.com/issues/57299
1915
    qa: test_dump_loads fails with JSONDecodeError
1916
* https://tracker.ceph.com/issues/50223
1917
    client.xxxx isn't responding to mclientcaps(revoke)
1918
* https://tracker.ceph.com/issues/52624
1919
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1920
* https://tracker.ceph.com/issues/57205
1921
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1922
* https://tracker.ceph.com/issues/57280
1923
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1924
* https://tracker.ceph.com/issues/51282
1925
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1926
* https://tracker.ceph.com/issues/48203
1927
    qa: quota failure
1928
* https://tracker.ceph.com/issues/36593
1929
    qa: quota failure caused by clients stepping on each other
1930
* https://tracker.ceph.com/issues/57580
1931 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1932
1933 76 Rishabh Dave
1934
h3. 2022 Aug 26
1935
1936
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1937
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1938
1939
* https://tracker.ceph.com/issues/57206
1940
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1941
* https://tracker.ceph.com/issues/56632
1942
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1943
* https://tracker.ceph.com/issues/56446
1944
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1945
* https://tracker.ceph.com/issues/51964
1946
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1947
* https://tracker.ceph.com/issues/53859
1948
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1949
1950
* https://tracker.ceph.com/issues/54460
1951
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1952
* https://tracker.ceph.com/issues/54462
1953
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1954
1956
* https://tracker.ceph.com/issues/36593
1957
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1958
1959
* https://tracker.ceph.com/issues/52624
1960
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1961
* https://tracker.ceph.com/issues/55804
1962
  Command failed (workunit test suites/pjd.sh)
1963
* https://tracker.ceph.com/issues/50223
1964
  client.xxxx isn't responding to mclientcaps(revoke)
1965 75 Venky Shankar
1966
1967
h3. 2022 Aug 22
1968
1969
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1970
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1971
1972
* https://tracker.ceph.com/issues/52624
1973
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1974
* https://tracker.ceph.com/issues/56446
1975
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1976
* https://tracker.ceph.com/issues/55804
1977
    Command failed (workunit test suites/pjd.sh)
1978
* https://tracker.ceph.com/issues/51278
1979
    mds: "FAILED ceph_assert(!segments.empty())"
1980
* https://tracker.ceph.com/issues/54460
1981
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1982
* https://tracker.ceph.com/issues/57205
1983
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1984
* https://tracker.ceph.com/issues/57206
1985
    ceph_test_libcephfs_reclaim crashes during test
1986
* https://tracker.ceph.com/issues/53859
1987
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1988
* https://tracker.ceph.com/issues/50223
1989 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1990
1991
h3. 2022 Aug 12
1992
1993
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1994
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1995
1996
* https://tracker.ceph.com/issues/52624
1997
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1998
* https://tracker.ceph.com/issues/56446
1999
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2000
* https://tracker.ceph.com/issues/51964
2001
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2002
* https://tracker.ceph.com/issues/55804
2003
    Command failed (workunit test suites/pjd.sh)
2004
* https://tracker.ceph.com/issues/50223
2005
    client.xxxx isn't responding to mclientcaps(revoke)
2006
* https://tracker.ceph.com/issues/50821
2007 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
2008 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2009 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2010
2011
h3. 2022 Aug 04
2012
2013
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2014
2015 69 Rishabh Dave
Unrelated teuthology failure on RHEL
2016 68 Rishabh Dave
2017
h3. 2022 Jul 25
2018
2019
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2020
2021 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2022
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2023 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2024
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2025
2026
* https://tracker.ceph.com/issues/55804
2027
  Command failed (workunit test suites/pjd.sh)
2028
* https://tracker.ceph.com/issues/50223
2029
  client.xxxx isn't responding to mclientcaps(revoke)
2030
2031
* https://tracker.ceph.com/issues/54460
2032
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2033 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2034 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2035 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2036 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2037
2038
h3. 2022 July 22
2039
2040
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2041
2042
MDS_HEALTH_DUMMY error in log fixed by followup commit.
2043
transient selinux ping failure
2044
2045
* https://tracker.ceph.com/issues/56694
2046
    qa: avoid blocking forever on hung umount
2047
* https://tracker.ceph.com/issues/56695
2048
    [RHEL stock] pjd test failures
2049
* https://tracker.ceph.com/issues/56696
2050
    admin keyring disappears during qa run
2051
* https://tracker.ceph.com/issues/56697
2052
    qa: fs/snaps fails for fuse
2053
* https://tracker.ceph.com/issues/50222
2054
    osd: 5.2s0 deep-scrub : stat mismatch
2055
* https://tracker.ceph.com/issues/56698
2056
    client: FAILED ceph_assert(_size == 0)
2057
* https://tracker.ceph.com/issues/50223
2058
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2059 66 Rishabh Dave
2060 65 Rishabh Dave
2061
h3. 2022 Jul 15
2062
2063
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2064
2065
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2066
2067
* https://tracker.ceph.com/issues/53859
2068
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2069
* https://tracker.ceph.com/issues/55804
2070
  Command failed (workunit test suites/pjd.sh)
2071
* https://tracker.ceph.com/issues/50223
2072
  client.xxxx isn't responding to mclientcaps(revoke)
2073
* https://tracker.ceph.com/issues/50222
2074
  osd: deep-scrub : stat mismatch
2075
2076
* https://tracker.ceph.com/issues/56632
2077
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2078
* https://tracker.ceph.com/issues/56634
2079
  workunit test fs/snaps/snaptest-intodir.sh
2080
* https://tracker.ceph.com/issues/56644
2081
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2082
2083 61 Rishabh Dave
2084
2085
h3. 2022 July 05
2086 62 Rishabh Dave
2087 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2088
2089
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2090
2091
On 2nd re-run only a few jobs failed -
2092 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2094
2095
* https://tracker.ceph.com/issues/56446
2096
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2097
* https://tracker.ceph.com/issues/55804
2098
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2099
2100
* https://tracker.ceph.com/issues/56445
2101 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2102
* https://tracker.ceph.com/issues/51267
2103
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2104 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2105
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
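A note on the status-123 failures above (56445): exit status 123 comes from xargs itself, which returns 123 when any command it launched exited with a status between 1 and 125, so the real error is in one of the gzip invocations, not in find. A quick local illustration:

<pre><code class="shell">
# xargs exits 123 if any invoked command fails with status 1-125.
printf 'a\nb\n' | xargs -n1 false
echo "xargs exit status: $?"   # prints 123
</code></pre>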
2106 61 Rishabh Dave
2107 58 Venky Shankar
2108
2109
h3. 2022 July 04
2110
2111
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2112
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
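For context, a rough sketch of how such a run is scheduled with that exclusion filter. The branch name is the one from the run above; the remaining flags are from memory and should be checked against the local teuthology install:

<pre><code class="shell">
# Schedule the fs suite against the wip branch, skipping jobs whose
# description matches "rhel" (flag spellings assumed, not verified here).
teuthology-suite \
  --ceph wip-vshankar-testing-20220627-100931 \
  --suite fs \
  --machine-type smithi \
  --kernel testing \
  --filter-out rhel
</code></pre>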
2113
2114
* https://tracker.ceph.com/issues/56445
2115 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2116
* https://tracker.ceph.com/issues/56446
2117
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2118
* https://tracker.ceph.com/issues/51964
2119 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2120 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2121 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2122
2123
h3. 2022 June 20
2124
2125
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2126
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2127
2128
* https://tracker.ceph.com/issues/52624
2129
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2130
* https://tracker.ceph.com/issues/55804
2131
    qa failure: pjd link tests failed
2132
* https://tracker.ceph.com/issues/54108
2133
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2134
* https://tracker.ceph.com/issues/55332
2135 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2136
2137
h3. 2022 June 13
2138
2139
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2140
2141
* https://tracker.ceph.com/issues/56024
2142
    cephadm: removes ceph.conf during qa run causing command failure
2143
* https://tracker.ceph.com/issues/48773
2144
    qa: scrub does not complete
2145
* https://tracker.ceph.com/issues/56012
2146
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2147 55 Venky Shankar
2148 54 Venky Shankar
2149
h3. 2022 Jun 13
2150
2151
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2152
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2153
2154
* https://tracker.ceph.com/issues/52624
2155
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2156
* https://tracker.ceph.com/issues/51964
2157
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2158
* https://tracker.ceph.com/issues/53859
2159
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2160
* https://tracker.ceph.com/issues/55804
2161
    qa failure: pjd link tests failed
2162
* https://tracker.ceph.com/issues/56003
2163
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2164
* https://tracker.ceph.com/issues/56011
2165
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2166
* https://tracker.ceph.com/issues/56012
2167 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2168
2169
h3. 2022 Jun 07
2170
2171
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2172
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2173
2174
* https://tracker.ceph.com/issues/52624
2175
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2176
* https://tracker.ceph.com/issues/50223
2177
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2178
* https://tracker.ceph.com/issues/50224
2179 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2180
2181
h3. 2022 May 12
2182 52 Venky Shankar
2183 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2184
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2185
2186
* https://tracker.ceph.com/issues/52624
2187
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2188
* https://tracker.ceph.com/issues/50223
2189
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2190
* https://tracker.ceph.com/issues/55332
2191
    Failure in snaptest-git-ceph.sh
2192
* https://tracker.ceph.com/issues/53859
2193 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2194 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2195
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2196 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2197 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2198
2199 50 Venky Shankar
h3. 2022 May 04
2200
2201
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2202 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2203
2204
* https://tracker.ceph.com/issues/52624
2205
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2206
* https://tracker.ceph.com/issues/50223
2207
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2208
* https://tracker.ceph.com/issues/55332
2209
    Failure in snaptest-git-ceph.sh
2210
* https://tracker.ceph.com/issues/53859
2211
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2212
* https://tracker.ceph.com/issues/55516
2213
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2214
* https://tracker.ceph.com/issues/55537
2215
    mds: crash during fs:upgrade test
2216
* https://tracker.ceph.com/issues/55538
2217 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2218
2219
h3. 2022 Apr 25
2220
2221
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2222
2223
* https://tracker.ceph.com/issues/52624
2224
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2225
* https://tracker.ceph.com/issues/50223
2226
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2227
* https://tracker.ceph.com/issues/55258
2228
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2229
* https://tracker.ceph.com/issues/55377
2230 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2231
2232
h3. 2022 Apr 14
2233
2234
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2235
2236
* https://tracker.ceph.com/issues/52624
2237
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2238
* https://tracker.ceph.com/issues/50223
2239
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2240
* https://tracker.ceph.com/issues/52438
2241
    qa: ffsb timeout
2242
* https://tracker.ceph.com/issues/55170
2243
    mds: crash during rejoin (CDir::fetch_keys)
2244
* https://tracker.ceph.com/issues/55331
2245
    pjd failure
2246
* https://tracker.ceph.com/issues/48773
2247
    qa: scrub does not complete
2248
* https://tracker.ceph.com/issues/55332
2249
    Failure in snaptest-git-ceph.sh
2250
* https://tracker.ceph.com/issues/55258
2251 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2252
2253 46 Venky Shankar
h3. 2022 Apr 11
2254 45 Venky Shankar
2255
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2256
2257
* https://tracker.ceph.com/issues/48773
2258
    qa: scrub does not complete
2259
* https://tracker.ceph.com/issues/52624
2260
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2261
* https://tracker.ceph.com/issues/52438
2262
    qa: ffsb timeout
2263
* https://tracker.ceph.com/issues/48680
2264
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2265
* https://tracker.ceph.com/issues/55236
2266
    qa: fs/snaps tests fails with "hit max job timeout"
2267
* https://tracker.ceph.com/issues/54108
2268
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2269
* https://tracker.ceph.com/issues/54971
2270
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2271
* https://tracker.ceph.com/issues/50223
2272
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2273
* https://tracker.ceph.com/issues/55258
2274 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2275 42 Venky Shankar
2276 43 Venky Shankar
h3. 2022 Mar 21
2277
2278
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2279
2280
The run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, merging only unrelated PRs that pass tests.
2281
2282
2283 42 Venky Shankar
h3. 2022 Mar 08
2284
2285
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2286
2287
rerun with
2288
- (drop) https://github.com/ceph/ceph/pull/44679
2289
- (drop) https://github.com/ceph/ceph/pull/44958
2290
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
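The "drop problematic PR and re-run" step above boils down to rebuilding the integration branch without the offending merges and scheduling it again. A purely illustrative sketch of that workflow; the branch name wip-retest, the PR number 12345 and the ceph-ci remote are placeholders, not the actual tooling behind these runs:

<pre><code class="shell">
# Rebuild a testing branch without the dropped PRs (illustrative only).
git fetch origin main
git checkout -b wip-retest origin/main

# Merge each PR being kept; simply skip the ones identified as problematic.
git fetch origin pull/12345/head:pr-12345   # repeat per PR
git merge --no-ff pr-12345

# Push to the CI remote and schedule a fresh teuthology run against it.
git push ceph-ci wip-retest
</code></pre>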
2291
2292
* https://tracker.ceph.com/issues/54419 (new)
2293
    `ceph orch upgrade start` seems to never reach completion
2294
* https://tracker.ceph.com/issues/51964
2295
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2296
* https://tracker.ceph.com/issues/52624
2297
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2298
* https://tracker.ceph.com/issues/50223
2299
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2300
* https://tracker.ceph.com/issues/52438
2301
    qa: ffsb timeout
2302
* https://tracker.ceph.com/issues/50821
2303
    qa: untar_snap_rm failure during mds thrashing
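Related to 54419 above: the upgrade flow exercised by the fs:upgrade jobs can be driven, watched and aborted by hand with the cephadm orchestrator, which helps when reproducing a stalled upgrade outside teuthology. A minimal sketch; the version string is only an example:

<pre><code class="shell">
# Kick off a cephadm-managed upgrade to a specific release.
ceph orch upgrade start --ceph-version 16.2.7

# Watch progress (ceph -s also shows the upgrade progress bar).
ceph orch upgrade status
ceph -s

# If it never reaches completion, stop it and inspect the active mgr log.
ceph orch upgrade stop
</code></pre>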
2304 41 Venky Shankar
2305
2306
h3. 2022 Feb 09
2307
2308
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2309
2310
rerun with
2311
- (drop) https://github.com/ceph/ceph/pull/37938
2312
- (drop) https://github.com/ceph/ceph/pull/44335
2313
- (drop) https://github.com/ceph/ceph/pull/44491
2314
- (drop) https://github.com/ceph/ceph/pull/44501
2315
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2316
2317
* https://tracker.ceph.com/issues/51964
2318
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2319
* https://tracker.ceph.com/issues/54066
2320
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2321
* https://tracker.ceph.com/issues/48773
2322
    qa: scrub does not complete
2323
* https://tracker.ceph.com/issues/52624
2324
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2325
* https://tracker.ceph.com/issues/50223
2326
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2327
* https://tracker.ceph.com/issues/52438
2328 40 Patrick Donnelly
    qa: ffsb timeout
2329
2330
h3. 2022 Feb 01
2331
2332
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2333
2334
* https://tracker.ceph.com/issues/54107
2335
    kclient: hang during umount
2336
* https://tracker.ceph.com/issues/54106
2337
    kclient: hang during workunit cleanup
2338
* https://tracker.ceph.com/issues/54108
2339
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2340
* https://tracker.ceph.com/issues/48773
2341
    qa: scrub does not complete
2342
* https://tracker.ceph.com/issues/52438
2343
    qa: ffsb timeout
2344 36 Venky Shankar
2345
2346
h3. 2022 Jan 13
2347 39 Venky Shankar
2348 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2349 38 Venky Shankar
2350
rerun with:
2351 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2352
- (drop) https://github.com/ceph/ceph/pull/43184
2353
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2354
2355
* https://tracker.ceph.com/issues/50223
2356
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2357
* https://tracker.ceph.com/issues/51282
2358
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2359
* https://tracker.ceph.com/issues/48773
2360
    qa: scrub does not complete
2361
* https://tracker.ceph.com/issues/52624
2362
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2363
* https://tracker.ceph.com/issues/53859
2364 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2365
2366
h3. 2022 Jan 03
2367
2368
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2369
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2370
2371
* https://tracker.ceph.com/issues/50223
2372
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2373
* https://tracker.ceph.com/issues/51964
2374
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2375
* https://tracker.ceph.com/issues/51267
2376
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2377
* https://tracker.ceph.com/issues/51282
2378
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2379
* https://tracker.ceph.com/issues/50821
2380
    qa: untar_snap_rm failure during mds thrashing
2381 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2382
    mds: "FAILED ceph_assert(!segments.empty())"
2383
* https://tracker.ceph.com/issues/52279
2384 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2385 33 Patrick Donnelly
2386
2387
h3. 2021 Dec 22
2388
2389
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2390
2391
* https://tracker.ceph.com/issues/52624
2392
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2393
* https://tracker.ceph.com/issues/50223
2394
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2395
* https://tracker.ceph.com/issues/52279
2396
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2397
* https://tracker.ceph.com/issues/50224
2398
    qa: test_mirroring_init_failure_with_recovery failure
2399
* https://tracker.ceph.com/issues/48773
2400
    qa: scrub does not complete
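Several of the recurring entries above (52624 PG_AVAILABILITY, the PG_DEGRADED noise from 51282) are transient health warnings that fail a job only because they land in the cluster log. While triaging, they can be silenced with an extra override fragment passed to the scheduler; a sketch, assuming the usual overrides layout of the qa suite yaml fragments:

<pre><code class="shell">
# Extra yaml fragment for a teuthology-suite invocation (layout assumed
# from the qa suite override fragments; adjust to the suite in use).
cat > pg-warn-overrides.yaml <<'EOF'
overrides:
  ceph:
    log-ignorelist:
      - \(PG_AVAILABILITY\)
      - \(PG_DEGRADED\)
EOF
# teuthology-suite ... pg-warn-overrides.yaml
</code></pre>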
2401 32 Venky Shankar
2402
2403
h3. 2021 Nov 30
2404
2405
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2406
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2407
2408
* https://tracker.ceph.com/issues/53436
2409
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2410
* https://tracker.ceph.com/issues/51964
2411
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2412
* https://tracker.ceph.com/issues/48812
2413
    qa: test_scrub_pause_and_resume_with_abort failure
2414
* https://tracker.ceph.com/issues/51076
2415
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2416
* https://tracker.ceph.com/issues/50223
2417
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2418
* https://tracker.ceph.com/issues/52624
2419
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2420
* https://tracker.ceph.com/issues/50250
2421
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2422 31 Patrick Donnelly
2423
2424
h3. 2021 November 9
2425
2426
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2427
2428
* https://tracker.ceph.com/issues/53214
2429
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2430
* https://tracker.ceph.com/issues/48773
2431
    qa: scrub does not complete
2432
* https://tracker.ceph.com/issues/50223
2433
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2434
* https://tracker.ceph.com/issues/51282
2435
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2436
* https://tracker.ceph.com/issues/52624
2437
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2438
* https://tracker.ceph.com/issues/53216
2439
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2440
* https://tracker.ceph.com/issues/50250
2441
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2442
2443 30 Patrick Donnelly
2444
2445
h3. 2021 November 03
2446
2447
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2448
2449
* https://tracker.ceph.com/issues/51964
2450
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2451
* https://tracker.ceph.com/issues/51282
2452
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2453
* https://tracker.ceph.com/issues/52436
2454
    fs/ceph: "corrupt mdsmap"
2455
* https://tracker.ceph.com/issues/53074
2456
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2457
* https://tracker.ceph.com/issues/53150
2458
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2459
* https://tracker.ceph.com/issues/53155
2460
    MDSMonitor: assertion during upgrade to v16.2.5+
2461 29 Patrick Donnelly
2462
2463
h3. 2021 October 26
2464
2465
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2466
2467
* https://tracker.ceph.com/issues/53074
2468
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2469
* https://tracker.ceph.com/issues/52997
2470
    testing: hanging umount
2471
* https://tracker.ceph.com/issues/50824
2472
    qa: snaptest-git-ceph bus error
2473
* https://tracker.ceph.com/issues/52436
2474
    fs/ceph: "corrupt mdsmap"
2475
* https://tracker.ceph.com/issues/48773
2476
    qa: scrub does not complete
2477
* https://tracker.ceph.com/issues/53082
2478
    ceph-fuse: segmentation fault in Client::handle_mds_map
2479
* https://tracker.ceph.com/issues/50223
2480
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2481
* https://tracker.ceph.com/issues/52624
2482
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2483
* https://tracker.ceph.com/issues/50224
2484
    qa: test_mirroring_init_failure_with_recovery failure
2485
* https://tracker.ceph.com/issues/50821
2486
    qa: untar_snap_rm failure during mds thrashing
2487
* https://tracker.ceph.com/issues/50250
2488
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2489
2490 27 Patrick Donnelly
2491
2492 28 Patrick Donnelly
h3. 2021 October 19
2493 27 Patrick Donnelly
2494
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2495
2496
* https://tracker.ceph.com/issues/52995
2497
    qa: test_standby_count_wanted failure
2498
* https://tracker.ceph.com/issues/52948
2499
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2500
* https://tracker.ceph.com/issues/52996
2501
    qa: test_perf_counters via test_openfiletable
2502
* https://tracker.ceph.com/issues/48772
2503
    qa: pjd: not ok 9, 44, 80
2504
* https://tracker.ceph.com/issues/52997
2505
    testing: hanging umount
2506
* https://tracker.ceph.com/issues/50250
2507
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2508
* https://tracker.ceph.com/issues/52624
2509
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2510
* https://tracker.ceph.com/issues/50223
2511
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2512
* https://tracker.ceph.com/issues/50821
2513
    qa: untar_snap_rm failure during mds thrashing
2514
* https://tracker.ceph.com/issues/48773
2515
    qa: scrub does not complete
2516 26 Patrick Donnelly
2517
2518
h3. 2021 October 12
2519
2520
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2521
2522
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2523
2524
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2525
2526
2527
* https://tracker.ceph.com/issues/51282
2528
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2529
* https://tracker.ceph.com/issues/52948
2530
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2531
* https://tracker.ceph.com/issues/48773
2532
    qa: scrub does not complete
2533
* https://tracker.ceph.com/issues/50224
2534
    qa: test_mirroring_init_failure_with_recovery failure
2535
* https://tracker.ceph.com/issues/52949
2536
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2537 25 Patrick Donnelly
2538 23 Patrick Donnelly
2539 24 Patrick Donnelly
h3. 2021 October 02
2540
2541
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2542
2543
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2544
2545
test_simple failures caused by PR in this set.
2546
2547
A few reruns because of QA infra noise.
2548
2549
* https://tracker.ceph.com/issues/52822
2550
    qa: failed pacific install on fs:upgrade
2551
* https://tracker.ceph.com/issues/52624
2552
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2553
* https://tracker.ceph.com/issues/50223
2554
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2555
* https://tracker.ceph.com/issues/48773
2556
    qa: scrub does not complete
2557
2558
2559 23 Patrick Donnelly
h3. 2021 September 20
2560
2561
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2562
2563
* https://tracker.ceph.com/issues/52677
2564
    qa: test_simple failure
2565
* https://tracker.ceph.com/issues/51279
2566
    kclient hangs on umount (testing branch)
2567
* https://tracker.ceph.com/issues/50223
2568
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2569
* https://tracker.ceph.com/issues/50250
2570
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2571
* https://tracker.ceph.com/issues/52624
2572
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2573
* https://tracker.ceph.com/issues/52438
2574
    qa: ffsb timeout
2575 22 Patrick Donnelly
2576
2577
h3. 2021 September 10
2578
2579
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2580
2581
* https://tracker.ceph.com/issues/50223
2582
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2583
* https://tracker.ceph.com/issues/50250
2584
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2585
* https://tracker.ceph.com/issues/52624
2586
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2587
* https://tracker.ceph.com/issues/52625
2588
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2589
* https://tracker.ceph.com/issues/52439
2590
    qa: acls does not compile on centos stream
2591
* https://tracker.ceph.com/issues/50821
2592
    qa: untar_snap_rm failure during mds thrashing
2593
* https://tracker.ceph.com/issues/48773
2594
    qa: scrub does not complete
2595
* https://tracker.ceph.com/issues/52626
2596
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2597
* https://tracker.ceph.com/issues/51279
2598
    kclient hangs on umount (testing branch)
2599 21 Patrick Donnelly
2600
2601
h3. 2021 August 27
2602
2603
Several jobs died because of device failures.
2604
2605
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2606
2607
* https://tracker.ceph.com/issues/52430
2608
    mds: fast async create client mount breaks racy test
2609
* https://tracker.ceph.com/issues/52436
2610
    fs/ceph: "corrupt mdsmap"
2611
* https://tracker.ceph.com/issues/52437
2612
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2613
* https://tracker.ceph.com/issues/51282
2614
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2615
* https://tracker.ceph.com/issues/52438
2616
    qa: ffsb timeout
2617
* https://tracker.ceph.com/issues/52439
2618
    qa: acls does not compile on centos stream
2619 20 Patrick Donnelly
2620
2621
h3. 2021 July 30
2622
2623
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2624
2625
* https://tracker.ceph.com/issues/50250
2626
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2627
* https://tracker.ceph.com/issues/51282
2628
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2629
* https://tracker.ceph.com/issues/48773
2630
    qa: scrub does not complete
2631
* https://tracker.ceph.com/issues/51975
2632
    pybind/mgr/stats: KeyError
2633 19 Patrick Donnelly
2634
2635
h3. 2021 July 28
2636
2637
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2638
2639
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2640
2641
* https://tracker.ceph.com/issues/51905
2642
    qa: "error reading sessionmap 'mds1_sessionmap'"
2643
* https://tracker.ceph.com/issues/48773
2644
    qa: scrub does not complete
2645
* https://tracker.ceph.com/issues/50250
2646
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2647
* https://tracker.ceph.com/issues/51267
2648
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2649
* https://tracker.ceph.com/issues/51279
2650
    kclient hangs on umount (testing branch)
2651 18 Patrick Donnelly
2652
2653
h3. 2021 July 16
2654
2655
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2656
2657
* https://tracker.ceph.com/issues/48773
2658
    qa: scrub does not complete
2659
* https://tracker.ceph.com/issues/48772
2660
    qa: pjd: not ok 9, 44, 80
2661
* https://tracker.ceph.com/issues/45434
2662
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2663
* https://tracker.ceph.com/issues/51279
2664
    kclient hangs on umount (testing branch)
2665
* https://tracker.ceph.com/issues/50824
2666
    qa: snaptest-git-ceph bus error
2667 17 Patrick Donnelly
2668
2669
h3. 2021 July 04
2670
2671
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2672
2673
* https://tracker.ceph.com/issues/48773
2674
    qa: scrub does not complete
2675
* https://tracker.ceph.com/issues/39150
2676
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2677
* https://tracker.ceph.com/issues/45434
2678
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2679
* https://tracker.ceph.com/issues/51282
2680
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2681
* https://tracker.ceph.com/issues/48771
2682
    qa: iogen: workload fails to cause balancing
2683
* https://tracker.ceph.com/issues/51279
2684
    kclient hangs on umount (testing branch)
2685
* https://tracker.ceph.com/issues/50250
2686
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2687 16 Patrick Donnelly
2688
2689
h3. 2021 July 01
2690
2691
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2692
2693
* https://tracker.ceph.com/issues/51197
2694
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2695
* https://tracker.ceph.com/issues/50866
2696
    osd: stat mismatch on objects
2697
* https://tracker.ceph.com/issues/48773
2698
    qa: scrub does not complete
2699 15 Patrick Donnelly
2700
2701
h3. 2021 June 26
2702
2703
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2704
2705
* https://tracker.ceph.com/issues/51183
2706
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2707
* https://tracker.ceph.com/issues/51410
2708
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2709
* https://tracker.ceph.com/issues/48773
2710
    qa: scrub does not complete
2711
* https://tracker.ceph.com/issues/51282
2712
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2713
* https://tracker.ceph.com/issues/51169
2714
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2715
* https://tracker.ceph.com/issues/48772
2716
    qa: pjd: not ok 9, 44, 80
2717 14 Patrick Donnelly
2718
2719
h3. 2021 June 21
2720
2721
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2722
2723
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2724
2725
* https://tracker.ceph.com/issues/51282
2726
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2727
* https://tracker.ceph.com/issues/51183
2728
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2729
* https://tracker.ceph.com/issues/48773
2730
    qa: scrub does not complete
2731
* https://tracker.ceph.com/issues/48771
2732
    qa: iogen: workload fails to cause balancing
2733
* https://tracker.ceph.com/issues/51169
2734
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2735
* https://tracker.ceph.com/issues/50495
2736
    libcephfs: shutdown race fails with status 141
2737
* https://tracker.ceph.com/issues/45434
2738
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2739
* https://tracker.ceph.com/issues/50824
2740
    qa: snaptest-git-ceph bus error
2741
* https://tracker.ceph.com/issues/50223
2742
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2743 13 Patrick Donnelly
2744
2745
h3. 2021 June 16
2746
2747
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2748
2749
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2750
2751
* https://tracker.ceph.com/issues/45434
2752
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2753
* https://tracker.ceph.com/issues/51169
2754
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2755
* https://tracker.ceph.com/issues/43216
2756
    MDSMonitor: removes MDS coming out of quorum election
2757
* https://tracker.ceph.com/issues/51278
2758
    mds: "FAILED ceph_assert(!segments.empty())"
2759
* https://tracker.ceph.com/issues/51279
2760
    kclient hangs on umount (testing branch)
2761
* https://tracker.ceph.com/issues/51280
2762
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2763
* https://tracker.ceph.com/issues/51183
2764
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2765
* https://tracker.ceph.com/issues/51281
2766
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2767
* https://tracker.ceph.com/issues/48773
2768
    qa: scrub does not complete
2769
* https://tracker.ceph.com/issues/51076
2770
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2771
* https://tracker.ceph.com/issues/51228
2772
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2773
* https://tracker.ceph.com/issues/51282
2774
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2775 12 Patrick Donnelly
2776
2777
h3. 2021 June 14
2778
2779
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2780
2781
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2782
2783
* https://tracker.ceph.com/issues/51169
2784
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2785
* https://tracker.ceph.com/issues/51228
2786
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2787
* https://tracker.ceph.com/issues/48773
2788
    qa: scrub does not complete
2789
* https://tracker.ceph.com/issues/51183
2790
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2791
* https://tracker.ceph.com/issues/45434
2792
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2793
* https://tracker.ceph.com/issues/51182
2794
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2795
* https://tracker.ceph.com/issues/51229
2796
    qa: test_multi_snap_schedule list difference failure
2797
* https://tracker.ceph.com/issues/50821
2798
    qa: untar_snap_rm failure during mds thrashing
2799 11 Patrick Donnelly
2800
2801
h3. 2021 June 13
2802
2803
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2804
2805
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2806
2807
* https://tracker.ceph.com/issues/51169
2808
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2809
* https://tracker.ceph.com/issues/48773
2810
    qa: scrub does not complete
2811
* https://tracker.ceph.com/issues/51182
2812
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2813
* https://tracker.ceph.com/issues/51183
2814
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2815
* https://tracker.ceph.com/issues/51197
2816
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2817
* https://tracker.ceph.com/issues/45434
2818 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2819
2820
h3. 2021 June 11
2821
2822
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2823
2824
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2825
2826
* https://tracker.ceph.com/issues/51169
2827
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2828
* https://tracker.ceph.com/issues/45434
2829
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2830
* https://tracker.ceph.com/issues/48771
2831
    qa: iogen: workload fails to cause balancing
2832
* https://tracker.ceph.com/issues/43216
2833
    MDSMonitor: removes MDS coming out of quorum election
2834
* https://tracker.ceph.com/issues/51182
2835
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2836
* https://tracker.ceph.com/issues/50223
2837
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2838
* https://tracker.ceph.com/issues/48773
2839
    qa: scrub does not complete
2840
* https://tracker.ceph.com/issues/51183
2841
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2842
* https://tracker.ceph.com/issues/51184
2843
    qa: fs:bugs does not specify distro
2844 9 Patrick Donnelly
2845
2846
h3. 2021 June 03
2847
2848
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2849
2850
* https://tracker.ceph.com/issues/45434
2851
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2852
* https://tracker.ceph.com/issues/50016
2853
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2854
* https://tracker.ceph.com/issues/50821
2855
    qa: untar_snap_rm failure during mds thrashing
2856
* https://tracker.ceph.com/issues/50622 (regression)
2857
    msg: active_connections regression
2858
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2859
    qa: failed umount in test_volumes
2860
* https://tracker.ceph.com/issues/48773
2861
    qa: scrub does not complete
2862
* https://tracker.ceph.com/issues/43216
2863
    MDSMonitor: removes MDS coming out of quorum election
2864 7 Patrick Donnelly
2865
2866 8 Patrick Donnelly
h3. 2021 May 18
2867
2868
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2869
2870
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2871
looked better. Some odd new noise in the rerun relating to packaging and "No
2872
module named 'tasks.ceph'".
2873
2874
* https://tracker.ceph.com/issues/50824
2875
    qa: snaptest-git-ceph bus error
2876
* https://tracker.ceph.com/issues/50622 (regression)
2877
    msg: active_connections regression
2878
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2879
    qa: failed umount in test_volumes
2880
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2881
    qa: quota failure
2882
2883
2884 7 Patrick Donnelly
h3. 2021 May 18
2885
2886
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2887
2888
* https://tracker.ceph.com/issues/50821
2889
    qa: untar_snap_rm failure during mds thrashing
2890
* https://tracker.ceph.com/issues/48773
2891
    qa: scrub does not complete
2892
* https://tracker.ceph.com/issues/45591
2893
    mgr: FAILED ceph_assert(daemon != nullptr)
2894
* https://tracker.ceph.com/issues/50866
2895
    osd: stat mismatch on objects
2896
* https://tracker.ceph.com/issues/50016
2897
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2898
* https://tracker.ceph.com/issues/50867
2899
    qa: fs:mirror: reduced data availability
2900
2902
* https://tracker.ceph.com/issues/50622 (regression)
2903
    msg: active_connections regression
2904
* https://tracker.ceph.com/issues/50223
2905
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2906
* https://tracker.ceph.com/issues/50868
2907
    qa: "kern.log.gz already exists; not overwritten"
2908
* https://tracker.ceph.com/issues/50870
2909
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2910 6 Patrick Donnelly
2911
2912
h3. 2021 May 11
2913
2914
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2915
2916
One class of failures was caused by a PR in this set.
2917
* https://tracker.ceph.com/issues/48812
2918
    qa: test_scrub_pause_and_resume_with_abort failure
2919
* https://tracker.ceph.com/issues/50390
2920
    mds: monclient: wait_auth_rotating timed out after 30
2921
* https://tracker.ceph.com/issues/48773
2922
    qa: scrub does not complete
2923
* https://tracker.ceph.com/issues/50821
2924
    qa: untar_snap_rm failure during mds thrashing
2925
* https://tracker.ceph.com/issues/50224
2926
    qa: test_mirroring_init_failure_with_recovery failure
2927
* https://tracker.ceph.com/issues/50622 (regression)
2928
    msg: active_connections regression
2929
* https://tracker.ceph.com/issues/50825
2930
    qa: snaptest-git-ceph hang during mon thrashing v2
2931
2933
* https://tracker.ceph.com/issues/50823
2934
    qa: RuntimeError: timeout waiting for cluster to stabilize
2935 5 Patrick Donnelly
2936
2937
h3. 2021 May 14
2938
2939
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2940
2941
* https://tracker.ceph.com/issues/48812
2942
    qa: test_scrub_pause_and_resume_with_abort failure
2943
* https://tracker.ceph.com/issues/50821
2944
    qa: untar_snap_rm failure during mds thrashing
2945
* https://tracker.ceph.com/issues/50622 (regression)
2946
    msg: active_connections regression
2947
* https://tracker.ceph.com/issues/50822
2948
    qa: testing kernel patch for client metrics causes mds abort
2949
* https://tracker.ceph.com/issues/48773
2950
    qa: scrub does not complete
2951
* https://tracker.ceph.com/issues/50823
2952
    qa: RuntimeError: timeout waiting for cluster to stabilize
2953
* https://tracker.ceph.com/issues/50824
2954
    qa: snaptest-git-ceph bus error
2955
* https://tracker.ceph.com/issues/50825
2956
    qa: snaptest-git-ceph hang during mon thrashing v2
2957
* https://tracker.ceph.com/issues/50826
2958
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2959 4 Patrick Donnelly
2960
2961
h3. 2021 May 01
2962
2963
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2964
2965
* https://tracker.ceph.com/issues/45434
2966
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2967
* https://tracker.ceph.com/issues/50281
2968
    qa: untar_snap_rm timeout
2969
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2970
    qa: quota failure
2971
* https://tracker.ceph.com/issues/48773
2972
    qa: scrub does not complete
2973
* https://tracker.ceph.com/issues/50390
2974
    mds: monclient: wait_auth_rotating timed out after 30
2975
* https://tracker.ceph.com/issues/50250
2976
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2977
* https://tracker.ceph.com/issues/50622 (regression)
2978
    msg: active_connections regression
2979
* https://tracker.ceph.com/issues/45591
2980
    mgr: FAILED ceph_assert(daemon != nullptr)
2981
* https://tracker.ceph.com/issues/50221
2982
    qa: snaptest-git-ceph failure in git diff
2983
* https://tracker.ceph.com/issues/50016
2984
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2985 3 Patrick Donnelly
2986
2987
h3. 2021 Apr 15
2988
2989
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2990
2991
* https://tracker.ceph.com/issues/50281
2992
    qa: untar_snap_rm timeout
2993
* https://tracker.ceph.com/issues/50220
2994
    qa: dbench workload timeout
2995
* https://tracker.ceph.com/issues/50246
2996
    mds: failure replaying journal (EMetaBlob)
2997
* https://tracker.ceph.com/issues/50250
2998
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2999
* https://tracker.ceph.com/issues/50016
3000
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3001
* https://tracker.ceph.com/issues/50222
3002
    osd: 5.2s0 deep-scrub : stat mismatch
3003
* https://tracker.ceph.com/issues/45434
3004
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3005
* https://tracker.ceph.com/issues/49845
3006
    qa: failed umount in test_volumes
3007
* https://tracker.ceph.com/issues/37808
3008
    osd: osdmap cache weak_refs assert during shutdown
3009
* https://tracker.ceph.com/issues/50387
3010
    client: fs/snaps failure
3011
* https://tracker.ceph.com/issues/50389
3012
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
3013
* https://tracker.ceph.com/issues/50216
3014
    qa: "ls: cannot access 'lost+found': No such file or directory"
3015
* https://tracker.ceph.com/issues/50390
3016
    mds: monclient: wait_auth_rotating timed out after 30
3017
3018 1 Patrick Donnelly
3019
3020 2 Patrick Donnelly
h3. 2021 Apr 08
3021
3022
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
3023
3024
* https://tracker.ceph.com/issues/45434
3025
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3026
* https://tracker.ceph.com/issues/50016
3027
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3028
* https://tracker.ceph.com/issues/48773
3029
    qa: scrub does not complete
3030
* https://tracker.ceph.com/issues/50279
3031
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
3032
* https://tracker.ceph.com/issues/50246
3033
    mds: failure replaying journal (EMetaBlob)
3034
* https://tracker.ceph.com/issues/48365
3035
    qa: ffsb build failure on CentOS 8.2
3036
* https://tracker.ceph.com/issues/50216
3037
    qa: "ls: cannot access 'lost+found': No such file or directory"
3038
* https://tracker.ceph.com/issues/50223
3039
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3040
* https://tracker.ceph.com/issues/50280
3041
    cephadm: RuntimeError: uid/gid not found
3042
* https://tracker.ceph.com/issues/50281
3043
    qa: untar_snap_rm timeout
3044
3045 1 Patrick Donnelly
h3. 2021 Apr 08
3046
3047
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
3048
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
3049
3050
* https://tracker.ceph.com/issues/50246
3051
    mds: failure replaying journal (EMetaBlob)
3052
* https://tracker.ceph.com/issues/50250
3053
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3054
3055
3056
h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

Also, a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09
3182
3183
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3184
3185
* https://tracker.ceph.com/issues/49500
3186
    qa: "Assertion `cb_done' failed."
3187
* https://tracker.ceph.com/issues/48805
3188
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3189
* https://tracker.ceph.com/issues/48773
3190
    qa: scrub does not complete
3191
* https://tracker.ceph.com/issues/45434
3192
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3193
* https://tracker.ceph.com/issues/49240
3194
    terminate called after throwing an instance of 'std::bad_alloc'
3195
* https://tracker.ceph.com/issues/49466
3196
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3197
* https://tracker.ceph.com/issues/49684
3198
    qa: fs:cephadm mount does not wait for mds to be created
3199
* https://tracker.ceph.com/issues/48771
3200
    qa: iogen: workload fails to cause balancing