h1. <code>main</code> branch

h3. ADD NEW ENTRY HERE

h3. 4 Apr 2024

https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/

* https://tracker.ceph.com/issues/64927
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
* https://tracker.ceph.com/issues/65022
  qa: test_max_items_per_obj open procs not fully cleaned up
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/65136
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
* https://tracker.ceph.com/issues/65246
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)


* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has failures with fuse client
* https://tracker.ceph.com/issues/57656
[testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/48562
  qa: scrub - object missing on disk; some files may be lost
* https://tracker.ceph.com/issues/65020
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
  client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/65018
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54741
  crash: MDSTableClient::got_journaled_ack(unsigned long)
* https://tracker.ceph.com/issues/65265
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
* https://tracker.ceph.com/issues/65308
  qa: fs was offline but also unexpectedly degraded
* https://tracker.ceph.com/issues/65309
  qa: dbench.sh failed with "ERROR: handle 10318 was not found"

h3. 2024-04-02
57
58
https://tracker.ceph.com/issues/65215
59
60
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
61
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
62
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
63
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
64
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
65
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
66
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
67
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
68
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
69
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
70 241 Patrick Donnelly
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log":https://tracker.ceph.com/issues/65021
71
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
72
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
73
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
74
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
75 244 Patrick Donnelly
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
76 241 Patrick Donnelly
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534
77 240 Patrick Donnelly
78 236 Patrick Donnelly
h3. 2024-03-28
79
80
https://tracker.ceph.com/issues/65213
81
82 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
83
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
84
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
85 238 Patrick Donnelly
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
86
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
87
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
88 239 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
89
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
90
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
91
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
92
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
93
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
94
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
95
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
96
97
98 236 Patrick Donnelly
99 235 Milind Changire
h3. 2024-03-25
100
101
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
102
* https://tracker.ceph.com/issues/64502
103
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
104
105
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
106
107
* https://tracker.ceph.com/issues/62245
108
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
109
110
111 228 Patrick Donnelly
h3. 2024-03-20
112
113 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
114 228 Patrick Donnelly
115 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
116
117 229 Patrick Donnelly
Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.
118 1 Patrick Donnelly
119 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
120 228 Patrick Donnelly
121 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
122
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
123
* https://tracker.ceph.com/issues/64572
124
    workunits/fsx.sh failure
125
* https://tracker.ceph.com/issues/65018
126
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
127
* https://tracker.ceph.com/issues/64707 (new issue)
128
    suites/fsstress.sh hangs on one client - test times out
129 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
130
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
131
* https://tracker.ceph.com/issues/59684
132
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
133 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
134
    qa: "ceph tell 4.3a deep-scrub" command not found
135
* https://tracker.ceph.com/issues/54108
136
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
137
* https://tracker.ceph.com/issues/65019
138
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log 
139
* https://tracker.ceph.com/issues/65020
140
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
141
* https://tracker.ceph.com/issues/65021
142
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
143
* https://tracker.ceph.com/issues/63699
144
    qa: failed cephfs-shell test_reading_conf
145 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
146
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
147
* https://tracker.ceph.com/issues/50821
148
    qa: untar_snap_rm failure during mds thrashing
149 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
150
    qa: test_max_items_per_obj open procs not fully cleaned up
151 228 Patrick Donnelly
152 226 Venky Shankar
h3.  14th March 2024
153
154
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
155
156 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)
157 226 Venky Shankar
158
* https://tracker.ceph.com/issues/62067
159
    ffsb.sh failure "Resource temporarily unavailable"
160
* https://tracker.ceph.com/issues/57676
161
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
162
* https://tracker.ceph.com/issues/64502
163
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
164
* https://tracker.ceph.com/issues/64572
165
    workunits/fsx.sh failure
166
* https://tracker.ceph.com/issues/63700
167
    qa: test_cd_with_args failure
168
* https://tracker.ceph.com/issues/59684
169
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
170
* https://tracker.ceph.com/issues/61243
171
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
172
173 225 Venky Shankar
h3. 5th March 2024
174
175
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
176
177
* https://tracker.ceph.com/issues/57676
178
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
179
* https://tracker.ceph.com/issues/64502
180
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
181
* https://tracker.ceph.com/issues/63949
182
    leak in mds.c detected by valgrind during CephFS QA run
183
* https://tracker.ceph.com/issues/57656
184
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
185
* https://tracker.ceph.com/issues/63699
186
    qa: failed cephfs-shell test_reading_conf
187
* https://tracker.ceph.com/issues/64572
188
    workunits/fsx.sh failure
189
* https://tracker.ceph.com/issues/64707 (new issue)
190
    suites/fsstress.sh hangs on one client - test times out
191
* https://tracker.ceph.com/issues/59684
192
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
193
* https://tracker.ceph.com/issues/63700
194
    qa: test_cd_with_args failure
195
* https://tracker.ceph.com/issues/64711
196
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
197
* https://tracker.ceph.com/issues/64729 (new issue)
198
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
199
* https://tracker.ceph.com/issues/64730
200
    fs/misc/multiple_rsync.sh workunit times out
201
202 224 Venky Shankar
h3. 26th Feb 2024
203
204
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
205
206
(This run is a bit messy due to
207
208
  a) OCI runtime issues in the testing kernel with centos9
209
  b) SELinux denials related failures
210
  c) Unrelated MON_DOWN warnings)
211
212
* https://tracker.ceph.com/issues/57676
213
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
214
* https://tracker.ceph.com/issues/63700
215
    qa: test_cd_with_args failure
216
* https://tracker.ceph.com/issues/63949
217
    leak in mds.c detected by valgrind during CephFS QA run
218
* https://tracker.ceph.com/issues/59684
219
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
220
* https://tracker.ceph.com/issues/61243
221
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
222
* https://tracker.ceph.com/issues/63699
223
    qa: failed cephfs-shell test_reading_conf
224
* https://tracker.ceph.com/issues/64172
225
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
226
* https://tracker.ceph.com/issues/57656
227
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
228
* https://tracker.ceph.com/issues/64572
229
    workunits/fsx.sh failure
230
231 222 Patrick Donnelly
h3. 20th Feb 2024
232
233
https://github.com/ceph/ceph/pull/55601
234
https://github.com/ceph/ceph/pull/55659
235
236
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
237
238
* https://tracker.ceph.com/issues/64502
239
    client: quincy ceph-fuse fails to unmount after upgrade to main
240
241 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
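
For reference, here is a minimal sketch of what the failing check amounts to (illustrative only, not the actual teuthology task; the function name and mountpoint are invented for the example): issue <code>fusermount -u</code>, then poll <code>/proc/mounts</code> for roughly the 300-second window the run logs show before giving up.

<pre><code class="python">
import subprocess
import time

def fuse_unmount_completes(mountpoint: str, timeout: int = 300) -> bool:
    """Request an unmount of a ceph-fuse mountpoint and poll /proc/mounts
    until its entry disappears or the timeout expires."""
    subprocess.run(["fusermount", "-u", mountpoint], check=False)
    deadline = time.time() + timeout
    while time.time() < deadline:
        with open("/proc/mounts") as mounts:
            # /proc/mounts fields: device mountpoint fstype options dump pass
            if all(line.split()[1] != mountpoint for line in mounts):
                return True  # unmount finished within the window
        time.sleep(5)
    return False  # still mounted -- the i64502 behaviour seen in this run

# Example (placeholder path): fuse_unmount_completes("/mnt/cephfs-client.0")
</code></pre>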
242 218 Venky Shankar
243
h3. 19th Feb 2024
244
245 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
246
247 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
248
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
249
* https://tracker.ceph.com/issues/63700
250
    qa: test_cd_with_args failure
251
* https://tracker.ceph.com/issues/63141
252
    qa/cephfs: test_idem_unaffected_root_squash fails
253
* https://tracker.ceph.com/issues/59684
254
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
255
* https://tracker.ceph.com/issues/63949
256
    leak in mds.c detected by valgrind during CephFS QA run
257
* https://tracker.ceph.com/issues/63764
258
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
259
* https://tracker.ceph.com/issues/63699
260
    qa: failed cephfs-shell test_reading_conf
261 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
262
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
263 201 Rishabh Dave
264 217 Venky Shankar
h3. 29 Jan 2024
265
266
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
267
268
* https://tracker.ceph.com/issues/57676
269
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
270
* https://tracker.ceph.com/issues/63949
271
    leak in mds.c detected by valgrind during CephFS QA run
272
* https://tracker.ceph.com/issues/62067
273
    ffsb.sh failure "Resource temporarily unavailable"
274
* https://tracker.ceph.com/issues/64172
275
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
276
* https://tracker.ceph.com/issues/63265
277
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
278
* https://tracker.ceph.com/issues/61243
279
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
280
* https://tracker.ceph.com/issues/59684
281
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
282
* https://tracker.ceph.com/issues/57656
283
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
284
* https://tracker.ceph.com/issues/64209
285
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
286
287 216 Venky Shankar
h3. 17th Jan 2024
288
289
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
290
291
* https://tracker.ceph.com/issues/63764
292
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
293
* https://tracker.ceph.com/issues/57676
294
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
295
* https://tracker.ceph.com/issues/51964
296
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
297
* https://tracker.ceph.com/issues/63949
298
    leak in mds.c detected by valgrind during CephFS QA run
299
* https://tracker.ceph.com/issues/62067
300
    ffsb.sh failure "Resource temporarily unavailable"
301
* https://tracker.ceph.com/issues/61243
302
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
303
* https://tracker.ceph.com/issues/63259
304
    mds: failed to store backtrace and force file system read-only
305
* https://tracker.ceph.com/issues/63265
306
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
307
308
h3. 16 Jan 2024
309 215 Rishabh Dave
310 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
311
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
312
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
313
314
* https://tracker.ceph.com/issues/63764
315
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
316
* https://tracker.ceph.com/issues/63141
317
  qa/cephfs: test_idem_unaffected_root_squash fails
318
* https://tracker.ceph.com/issues/62067
319
  ffsb.sh failure "Resource temporarily unavailable" 
320
* https://tracker.ceph.com/issues/51964
321
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
322
* https://tracker.ceph.com/issues/54462 
323
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
324
* https://tracker.ceph.com/issues/57676
325
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
326
327
* https://tracker.ceph.com/issues/63949
328
  valgrind leak in MDS
329
* https://tracker.ceph.com/issues/64041
330
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
331
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
332
* from the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS
333
334 213 Venky Shankar
h3. 06 Dec 2023
335
336
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
337
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
338
339
* https://tracker.ceph.com/issues/63764
340
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
341
* https://tracker.ceph.com/issues/63233
342
    mon|client|mds: valgrind reports possible leaks in the MDS
343
* https://tracker.ceph.com/issues/57676
344
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
345
* https://tracker.ceph.com/issues/62580
346
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
347
* https://tracker.ceph.com/issues/62067
348
    ffsb.sh failure "Resource temporarily unavailable"
349
* https://tracker.ceph.com/issues/61243
350
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
351
* https://tracker.ceph.com/issues/62081
352
    tasks/fscrypt-common does not finish, timesout
353
* https://tracker.ceph.com/issues/63265
354
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
355
* https://tracker.ceph.com/issues/63806
356
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
357
358 211 Patrick Donnelly
h3. 30 Nov 2023
359
360
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
361
362
* https://tracker.ceph.com/issues/63699
363 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
364
* https://tracker.ceph.com/issues/63700
365
    qa: test_cd_with_args failure
366 211 Patrick Donnelly
367 210 Venky Shankar
h3. 29 Nov 2023
368
369
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
370
371
* https://tracker.ceph.com/issues/63233
372
    mon|client|mds: valgrind reports possible leaks in the MDS
373
* https://tracker.ceph.com/issues/63141
374
    qa/cephfs: test_idem_unaffected_root_squash fails
375
* https://tracker.ceph.com/issues/57676
376
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
377
* https://tracker.ceph.com/issues/57655
378
    qa: fs:mixed-clients kernel_untar_build failure
379
* https://tracker.ceph.com/issues/62067
380
    ffsb.sh failure "Resource temporarily unavailable"
381
* https://tracker.ceph.com/issues/61243
382
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
383
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
384
* https://tracker.ceph.com/issues/62810
385
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
386
387 206 Venky Shankar
h3. 14 Nov 2023
388 207 Milind Changire
(Milind)
389
390
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
391
392
* https://tracker.ceph.com/issues/53859
393
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
394
* https://tracker.ceph.com/issues/63233
395
  mon|client|mds: valgrind reports possible leaks in the MDS
396
* https://tracker.ceph.com/issues/63521
397
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
398
* https://tracker.ceph.com/issues/57655
399
  qa: fs:mixed-clients kernel_untar_build failure
400
* https://tracker.ceph.com/issues/62580
401
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
402
* https://tracker.ceph.com/issues/57676
403
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
404
* https://tracker.ceph.com/issues/61243
405
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
406
* https://tracker.ceph.com/issues/63141
407
    qa/cephfs: test_idem_unaffected_root_squash fails
408
* https://tracker.ceph.com/issues/51964
409
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
410
* https://tracker.ceph.com/issues/63522
411
    No module named 'tasks.ceph_fuse'
412
    No module named 'tasks.kclient'
413
    No module named 'tasks.cephfs.fuse_mount'
414
    No module named 'tasks.ceph'
415
* https://tracker.ceph.com/issues/63523
416
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
417
418
419
h3. 14 Nov 2023
420 206 Venky Shankar
421
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
422
423
(Ignore the fs:upgrade test failure - the PR is excluded from merge)
424
425
* https://tracker.ceph.com/issues/57676
426
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
427
* https://tracker.ceph.com/issues/63233
428
    mon|client|mds: valgrind reports possible leaks in the MDS
429
* https://tracker.ceph.com/issues/63141
430
    qa/cephfs: test_idem_unaffected_root_squash fails
431
* https://tracker.ceph.com/issues/62580
432
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
433
* https://tracker.ceph.com/issues/57655
434
    qa: fs:mixed-clients kernel_untar_build failure
435
* https://tracker.ceph.com/issues/51964
436
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
437
* https://tracker.ceph.com/issues/63519
438
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
439
* https://tracker.ceph.com/issues/57087
440
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
441
* https://tracker.ceph.com/issues/58945
442
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
443
444 204 Rishabh Dave
h3. 7 Nov 2023
445
446 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
447
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
448
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
449 204 Rishabh Dave
450
* https://tracker.ceph.com/issues/53859
451
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
452
* https://tracker.ceph.com/issues/63233
453
  mon|client|mds: valgrind reports possible leaks in the MDS
454
* https://tracker.ceph.com/issues/57655
455
  qa: fs:mixed-clients kernel_untar_build failure
456
* https://tracker.ceph.com/issues/57676
457
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
458
459
* https://tracker.ceph.com/issues/63473
460
  fsstress.sh failed with errno 124
461
462 202 Rishabh Dave
h3. 3 Nov 2023
463 203 Rishabh Dave
464 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
465
466
* https://tracker.ceph.com/issues/63141
467
  qa/cephfs: test_idem_unaffected_root_squash fails
468
* https://tracker.ceph.com/issues/63233
469
  mon|client|mds: valgrind reports possible leaks in the MDS
470
* https://tracker.ceph.com/issues/57656
471
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
472
* https://tracker.ceph.com/issues/57655
473
  qa: fs:mixed-clients kernel_untar_build failure
474
* https://tracker.ceph.com/issues/57676
475
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
476
477
* https://tracker.ceph.com/issues/59531
478
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
479
* https://tracker.ceph.com/issues/52624
480
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
481
482 198 Patrick Donnelly
h3. 24 October 2023
483
484
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
485
486 200 Patrick Donnelly
Two failures:
487
488
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
489
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
490
491
Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.
492
493 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
494
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
495
* https://tracker.ceph.com/issues/57676
496 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
497
* https://tracker.ceph.com/issues/63233
498
    mon|client|mds: valgrind reports possible leaks in the MDS
499
* https://tracker.ceph.com/issues/59531
500
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
501
* https://tracker.ceph.com/issues/57655
502
    qa: fs:mixed-clients kernel_untar_build failure
503 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
504
    ffsb.sh failure "Resource temporarily unavailable"
505
* https://tracker.ceph.com/issues/63411
506
    qa: flush journal may cause timeouts of `scrub status`
507
* https://tracker.ceph.com/issues/61243
508
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
509
* https://tracker.ceph.com/issues/63141
510 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
511 148 Rishabh Dave
512 195 Venky Shankar
h3. 18 Oct 2023
513
514
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
515
516
* https://tracker.ceph.com/issues/52624
517
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
518
* https://tracker.ceph.com/issues/57676
519
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
520
* https://tracker.ceph.com/issues/63233
521
    mon|client|mds: valgrind reports possible leaks in the MDS
522
* https://tracker.ceph.com/issues/63141
523
    qa/cephfs: test_idem_unaffected_root_squash fails
524
* https://tracker.ceph.com/issues/59531
525
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
526
* https://tracker.ceph.com/issues/62658
527
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
528
* https://tracker.ceph.com/issues/62580
529
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
530
* https://tracker.ceph.com/issues/62067
531
    ffsb.sh failure "Resource temporarily unavailable"
532
* https://tracker.ceph.com/issues/57655
533
    qa: fs:mixed-clients kernel_untar_build failure
534
* https://tracker.ceph.com/issues/62036
535
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
536
* https://tracker.ceph.com/issues/58945
537
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
538
* https://tracker.ceph.com/issues/62847
539
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
540
541 193 Venky Shankar
h3. 13 Oct 2023
542
543
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
544
545
* https://tracker.ceph.com/issues/52624
546
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
547
* https://tracker.ceph.com/issues/62936
548
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
549
* https://tracker.ceph.com/issues/47292
550
    cephfs-shell: test_df_for_valid_file failure
551
* https://tracker.ceph.com/issues/63141
552
    qa/cephfs: test_idem_unaffected_root_squash fails
553
* https://tracker.ceph.com/issues/62081
554
    tasks/fscrypt-common does not finish, timesout
555 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
556
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
557 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
558
    mon|client|mds: valgrind reports possible leaks in the MDS
559 193 Venky Shankar
560 190 Patrick Donnelly
h3. 16 Oct 2023
561
562
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
563
564 192 Patrick Donnelly
Infrastructure issues:
565
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
566
    Host lost.
567
568 196 Patrick Donnelly
One followup fix:
569
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
570
571 192 Patrick Donnelly
Failures:
572
573
* https://tracker.ceph.com/issues/56694
574
    qa: avoid blocking forever on hung umount
575
* https://tracker.ceph.com/issues/63089
576
    qa: tasks/mirror times out
577
* https://tracker.ceph.com/issues/52624
578
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
579
* https://tracker.ceph.com/issues/59531
580
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
581
* https://tracker.ceph.com/issues/57676
582
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
583
* https://tracker.ceph.com/issues/62658 
584
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
585
* https://tracker.ceph.com/issues/61243
586
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
587
* https://tracker.ceph.com/issues/57656
588
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
589
* https://tracker.ceph.com/issues/63233
590
  mon|client|mds: valgrind reports possible leaks in the MDS
591 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
592
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
593 192 Patrick Donnelly
594 189 Rishabh Dave
h3. 9 Oct 2023
595
596
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
597
598
* https://tracker.ceph.com/issues/54460
599
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
600
* https://tracker.ceph.com/issues/63141
601
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
602
* https://tracker.ceph.com/issues/62937
603
  logrotate doesn't support parallel execution on same set of logfiles
604
* https://tracker.ceph.com/issues/61400
605
  valgrind+ceph-mon issues
606
* https://tracker.ceph.com/issues/57676
607
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
608
* https://tracker.ceph.com/issues/55805
609
  error during scrub thrashing reached max tries in 900 secs
610
611 188 Venky Shankar
h3. 26 Sep 2023
612
613
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
614
615
* https://tracker.ceph.com/issues/52624
616
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
617
* https://tracker.ceph.com/issues/62873
618
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
619
* https://tracker.ceph.com/issues/61400
620
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
621
* https://tracker.ceph.com/issues/57676
622
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
623
* https://tracker.ceph.com/issues/62682
624
    mon: no mdsmap broadcast after "fs set joinable" is set to true
625
* https://tracker.ceph.com/issues/63089
626
    qa: tasks/mirror times out
627
628 185 Rishabh Dave
h3. 22 Sep 2023
629
630
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
631
632
* https://tracker.ceph.com/issues/59348
633
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
634
* https://tracker.ceph.com/issues/59344
635
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
636
* https://tracker.ceph.com/issues/59531
637
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
638
* https://tracker.ceph.com/issues/61574
639
  build failure for mdtest project
640
* https://tracker.ceph.com/issues/62702
641
  fsstress.sh: MDS slow requests for the internal 'rename' requests
642
* https://tracker.ceph.com/issues/57676
643
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
644
645
* https://tracker.ceph.com/issues/62863 
646
  deadlock in ceph-fuse causes teuthology job to hang and fail
647
* https://tracker.ceph.com/issues/62870
648
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
649
* https://tracker.ceph.com/issues/62873
650
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
651
652 186 Venky Shankar
h3. 20 Sep 2023
653
654
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
655
656
* https://tracker.ceph.com/issues/52624
657
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
658
* https://tracker.ceph.com/issues/61400
659
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
660
* https://tracker.ceph.com/issues/61399
661
    libmpich: undefined references to fi_strerror
662
* https://tracker.ceph.com/issues/62081
663
    tasks/fscrypt-common does not finish, timesout
664
* https://tracker.ceph.com/issues/62658 
665
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
666
* https://tracker.ceph.com/issues/62915
667
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
668
* https://tracker.ceph.com/issues/59531
669
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
670
* https://tracker.ceph.com/issues/62873
671
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
672
* https://tracker.ceph.com/issues/62936
673
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
674
* https://tracker.ceph.com/issues/62937
675
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
676
* https://tracker.ceph.com/issues/62510
677
    snaptest-git-ceph.sh failure with fs/thrash
680
* https://tracker.ceph.com/issues/62126
681
    test failure: suites/blogbench.sh stops running
682 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
683
    mon: no mdsmap broadcast after "fs set joinable" is set to true
684 186 Venky Shankar
685 184 Milind Changire
h3. 19 Sep 2023
686
687
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
688
689
* https://tracker.ceph.com/issues/58220#note-9
690
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
691
* https://tracker.ceph.com/issues/62702
692
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
693
* https://tracker.ceph.com/issues/57676
694
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
695
* https://tracker.ceph.com/issues/59348
696
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
697
* https://tracker.ceph.com/issues/52624
698
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
699
* https://tracker.ceph.com/issues/51964
700
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
701
* https://tracker.ceph.com/issues/61243
702
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
703
* https://tracker.ceph.com/issues/59344
704
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
705
* https://tracker.ceph.com/issues/62873
706
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
707
* https://tracker.ceph.com/issues/59413
708
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
709
* https://tracker.ceph.com/issues/53859
710
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
711
* https://tracker.ceph.com/issues/62482
712
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
713
714 178 Patrick Donnelly
715 177 Venky Shankar
h3. 13 Sep 2023
716
717
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
718
719
* https://tracker.ceph.com/issues/52624
720
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
721
* https://tracker.ceph.com/issues/57655
722
    qa: fs:mixed-clients kernel_untar_build failure
723
* https://tracker.ceph.com/issues/57676
724
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
725
* https://tracker.ceph.com/issues/61243
726
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
727
* https://tracker.ceph.com/issues/62567
728
    postgres workunit times out - MDS_SLOW_REQUEST in logs
729
* https://tracker.ceph.com/issues/61400
730
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
731
* https://tracker.ceph.com/issues/61399
732
    libmpich: undefined references to fi_strerror
737
* https://tracker.ceph.com/issues/51964
738
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
739
* https://tracker.ceph.com/issues/62081
740
    tasks/fscrypt-common does not finish, timesout
741 178 Patrick Donnelly
742 179 Patrick Donnelly
h3. 2023 Sep 12
743 178 Patrick Donnelly
744
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
745 1 Patrick Donnelly
746 181 Patrick Donnelly
A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
747
748 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
749 181 Patrick Donnelly
750
Failures:
751
752 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
753
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
754
* https://tracker.ceph.com/issues/57656
755
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
756
* https://tracker.ceph.com/issues/55805
757
  error scrub thrashing reached max tries in 900 secs
758
* https://tracker.ceph.com/issues/62067
759
    ffsb.sh failure "Resource temporarily unavailable"
760
* https://tracker.ceph.com/issues/59344
761
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
762
* https://tracker.ceph.com/issues/61399
763 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
764
* https://tracker.ceph.com/issues/62832
765
  common: config_proxy deadlock during shutdown (and possibly other times)
766
* https://tracker.ceph.com/issues/59413
767 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
768 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
769
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
770
* https://tracker.ceph.com/issues/62567
771
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
772
* https://tracker.ceph.com/issues/54460
773
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
774
* https://tracker.ceph.com/issues/58220#note-9
775
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
776
* https://tracker.ceph.com/issues/59348
777
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
778 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
779
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
780
* https://tracker.ceph.com/issues/62848
781
    qa: fail_fs upgrade scenario hanging
782
* https://tracker.ceph.com/issues/62081
783
    tasks/fscrypt-common does not finish, timesout
784 177 Venky Shankar
785 176 Venky Shankar
h3. 11 Sep 2023
786 175 Venky Shankar
787
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
788
789
* https://tracker.ceph.com/issues/52624
790
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
791
* https://tracker.ceph.com/issues/61399
792
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
793
* https://tracker.ceph.com/issues/57655
794
    qa: fs:mixed-clients kernel_untar_build failure
795
* https://tracker.ceph.com/issues/61399
796
    ior build failure
797
* https://tracker.ceph.com/issues/59531
798
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
799
* https://tracker.ceph.com/issues/59344
800
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
801
* https://tracker.ceph.com/issues/59346
802
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
803
* https://tracker.ceph.com/issues/59348
804
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
805
* https://tracker.ceph.com/issues/57676
806
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
807
* https://tracker.ceph.com/issues/61243
808
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
809
* https://tracker.ceph.com/issues/62567
810
  postgres workunit times out - MDS_SLOW_REQUEST in logs
811
812
813 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
814
815
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
816
817
* https://tracker.ceph.com/issues/51964
818
  test_cephfs_mirror_restart_sync_on_blocklist failure
819
* https://tracker.ceph.com/issues/59348
820
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
821
* https://tracker.ceph.com/issues/53859
822
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
823
* https://tracker.ceph.com/issues/61892
824
  test_strays.TestStrays.test_snapshot_remove failed
825
* https://tracker.ceph.com/issues/54460
826
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
827
* https://tracker.ceph.com/issues/59346
828
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
829
* https://tracker.ceph.com/issues/59344
830
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
831
* https://tracker.ceph.com/issues/62484
832
  qa: ffsb.sh test failure
833
* https://tracker.ceph.com/issues/62567
834
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
835
  
836
* https://tracker.ceph.com/issues/61399
837
  ior build failure
838
* https://tracker.ceph.com/issues/57676
839
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
840
* https://tracker.ceph.com/issues/55805
841
  error scrub thrashing reached max tries in 900 secs
842
843 172 Rishabh Dave
h3. 6 Sep 2023
844 171 Rishabh Dave
845 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
846 171 Rishabh Dave
847 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
848
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
849 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
850
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
851 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
852 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
853
* https://tracker.ceph.com/issues/59348
854
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
855
* https://tracker.ceph.com/issues/54462
856
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
857
* https://tracker.ceph.com/issues/62556
858
  test_acls: xfstests_dev: python2 is missing
859
* https://tracker.ceph.com/issues/62067
860
  ffsb.sh failure "Resource temporarily unavailable"
861
* https://tracker.ceph.com/issues/57656
862
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
863 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
864
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
865 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
866 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
867
868 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
869
  ior build failure
870
* https://tracker.ceph.com/issues/57676
871
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
872
* https://tracker.ceph.com/issues/55805
873
  error scrub thrashing reached max tries in 900 secs
874 173 Rishabh Dave
875
* https://tracker.ceph.com/issues/62567
876
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
877
* https://tracker.ceph.com/issues/62702
878
  workunit test suites/fsstress.sh on smithi066 with status 124
879 170 Rishabh Dave
880
h3. 5 Sep 2023
881
882
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
883
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
884
  this run has failures, but according to Adam King these are not relevant and should be ignored
885
886
* https://tracker.ceph.com/issues/61892
887
  test_snapshot_remove (test_strays.TestStrays) failed
888
* https://tracker.ceph.com/issues/59348
889
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
890
* https://tracker.ceph.com/issues/54462
891
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
892
* https://tracker.ceph.com/issues/62067
893
  ffsb.sh failure "Resource temporarily unavailable"
894
* https://tracker.ceph.com/issues/57656 
895
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
896
* https://tracker.ceph.com/issues/59346
897
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
898
* https://tracker.ceph.com/issues/59344
899
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
900
* https://tracker.ceph.com/issues/50223
901
  client.xxxx isn't responding to mclientcaps(revoke)
902
* https://tracker.ceph.com/issues/57655
903
  qa: fs:mixed-clients kernel_untar_build failure
904
* https://tracker.ceph.com/issues/62187
905
  iozone.sh: line 5: iozone: command not found
906
 
907
* https://tracker.ceph.com/issues/61399
908
  ior build failure
909
* https://tracker.ceph.com/issues/57676
910
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
911
* https://tracker.ceph.com/issues/55805
912
  error scrub thrashing reached max tries in 900 secs
913 169 Venky Shankar
914
915
h3. 31 Aug 2023
916
917
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
918
919
* https://tracker.ceph.com/issues/52624
920
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
921
* https://tracker.ceph.com/issues/62187
922
    iozone: command not found
923
* https://tracker.ceph.com/issues/61399
924
    ior build failure
925
* https://tracker.ceph.com/issues/59531
926
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
927
* https://tracker.ceph.com/issues/61399
928
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
929
* https://tracker.ceph.com/issues/57655
930
    qa: fs:mixed-clients kernel_untar_build failure
931
* https://tracker.ceph.com/issues/59344
932
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
933
* https://tracker.ceph.com/issues/59346
934
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
935
* https://tracker.ceph.com/issues/59348
936
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
937
* https://tracker.ceph.com/issues/59413
938
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
939
* https://tracker.ceph.com/issues/62653
940
    qa: unimplemented fcntl command: 1036 with fsstress
941
* https://tracker.ceph.com/issues/61400
942
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
943
* https://tracker.ceph.com/issues/62658
944
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
945
* https://tracker.ceph.com/issues/62188
946
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
947 168 Venky Shankar
948
949
h3. 25 Aug 2023
950
951
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
952
953
* https://tracker.ceph.com/issues/59344
954
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
955
* https://tracker.ceph.com/issues/59346
956
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
957
* https://tracker.ceph.com/issues/59348
958
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
959
* https://tracker.ceph.com/issues/57655
960
    qa: fs:mixed-clients kernel_untar_build failure
961
* https://tracker.ceph.com/issues/61243
962
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
963
* https://tracker.ceph.com/issues/61399
964
    ior build failure
965
* https://tracker.ceph.com/issues/61399
966
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
967
* https://tracker.ceph.com/issues/62484
968
    qa: ffsb.sh test failure
969
* https://tracker.ceph.com/issues/59531
970
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
971
* https://tracker.ceph.com/issues/62510
972
    snaptest-git-ceph.sh failure with fs/thrash
973 167 Venky Shankar
974
975
h3. 24 Aug 2023
976
977
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
978
979
* https://tracker.ceph.com/issues/57676
980
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
981
* https://tracker.ceph.com/issues/51964
982
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
983
* https://tracker.ceph.com/issues/59344
984
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
985
* https://tracker.ceph.com/issues/59346
986
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
987
* https://tracker.ceph.com/issues/59348
988
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
989
* https://tracker.ceph.com/issues/61399
990
    ior build failure
991
* https://tracker.ceph.com/issues/61399
992
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
993
* https://tracker.ceph.com/issues/62510
994
    snaptest-git-ceph.sh failure with fs/thrash
995
* https://tracker.ceph.com/issues/62484
996
    qa: ffsb.sh test failure
997
* https://tracker.ceph.com/issues/57087
998
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
999
* https://tracker.ceph.com/issues/57656
1000
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1001
* https://tracker.ceph.com/issues/62187
1002
    iozone: command not found
1003
* https://tracker.ceph.com/issues/62188
1004
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1005
* https://tracker.ceph.com/issues/62567
1006
    postgres workunit times out - MDS_SLOW_REQUEST in logs
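
On the scrub-thrashing failures above ("rank damage found: {'backtrace'}"): the thrasher is reporting entries left in the MDS damage table after its scrubs. A hedged sketch of inspecting this by hand (the filesystem name "cephfs" and rank 0 are placeholders):

<pre>
# Start a recursive forward scrub on rank 0, then check progress and
# any damage entries it recorded.
ceph tell mds.cephfs:0 scrub start / recursive
ceph tell mds.cephfs:0 scrub status
ceph tell mds.cephfs:0 damage ls
</pre>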
1007 166 Venky Shankar
1008
1009
h3. 22 Aug 2023
1010
1011
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
1012
1013
* https://tracker.ceph.com/issues/57676
1014
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1015
* https://tracker.ceph.com/issues/51964
1016
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1017
* https://tracker.ceph.com/issues/59344
1018
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1019
* https://tracker.ceph.com/issues/59346
1020
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1021
* https://tracker.ceph.com/issues/59348
1022
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1023
* https://tracker.ceph.com/issues/61399
1024
    ior build failure
1025
* https://tracker.ceph.com/issues/61399
1026
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1027
* https://tracker.ceph.com/issues/57655
1028
    qa: fs:mixed-clients kernel_untar_build failure
1029
* https://tracker.ceph.com/issues/61243
1030
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1031
* https://tracker.ceph.com/issues/62188
1032
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1033
* https://tracker.ceph.com/issues/62510
1034
    snaptest-git-ceph.sh failure with fs/thrash
1035
* https://tracker.ceph.com/issues/62511
1036
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
1037 165 Venky Shankar
1038
1039
h3. 14 Aug 2023
1040
1041
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1042
1043
* https://tracker.ceph.com/issues/51964
1044
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1045
* https://tracker.ceph.com/issues/61400
1046
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1047
* https://tracker.ceph.com/issues/61399
1048
    ior build failure
1049
* https://tracker.ceph.com/issues/59348
1050
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1051
* https://tracker.ceph.com/issues/59531
1052
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1053
* https://tracker.ceph.com/issues/59344
1054
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1055
* https://tracker.ceph.com/issues/59346
1056
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1057
* https://tracker.ceph.com/issues/61399
1058
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1059
* https://tracker.ceph.com/issues/59684 [kclient bug]
1060
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1061
* https://tracker.ceph.com/issues/61243 (NEW)
1062
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1063
* https://tracker.ceph.com/issues/57655
1064
    qa: fs:mixed-clients kernel_untar_build failure
1065
* https://tracker.ceph.com/issues/57656
1066
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1067 163 Venky Shankar
1068
1069
h3. 28 JULY 2023
1070
1071
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1072
1073
* https://tracker.ceph.com/issues/51964
1074
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1075
* https://tracker.ceph.com/issues/61400
1076
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1077
* https://tracker.ceph.com/issues/61399
1078
    ior build failure
1079
* https://tracker.ceph.com/issues/57676
1080
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1081
* https://tracker.ceph.com/issues/59348
1082
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1083
* https://tracker.ceph.com/issues/59531
1084
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1085
* https://tracker.ceph.com/issues/59344
1086
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1087
* https://tracker.ceph.com/issues/59346
1088
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1089
* https://github.com/ceph/ceph/pull/52556
1090
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1091
* https://tracker.ceph.com/issues/62187
1092
    iozone: command not found
1093
* https://tracker.ceph.com/issues/61399
1094
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1095
* https://tracker.ceph.com/issues/62188
1096 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1097 158 Rishabh Dave
1098
h3. 24 Jul 2023
1099
1100
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1101
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1102
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1103
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1104
One more run to check whether blogbench.sh fails every time:
1105
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1106
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1107 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1108
1109
* https://tracker.ceph.com/issues/61892
1110
  test_snapshot_remove (test_strays.TestStrays) failed
1111
* https://tracker.ceph.com/issues/53859
1112
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1113
* https://tracker.ceph.com/issues/61982
1114
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1115
* https://tracker.ceph.com/issues/52438
1116
  qa: ffsb timeout
1117
* https://tracker.ceph.com/issues/54460
1118
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1119
* https://tracker.ceph.com/issues/57655
1120
  qa: fs:mixed-clients kernel_untar_build failure
1121
* https://tracker.ceph.com/issues/48773
1122
  reached max tries: scrub does not complete
1123
* https://tracker.ceph.com/issues/58340
1124
  mds: fsstress.sh hangs with multimds
1125
* https://tracker.ceph.com/issues/61400
1126
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1127
* https://tracker.ceph.com/issues/57206
1128
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1129
  
1130
* https://tracker.ceph.com/issues/57656
1131
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1132
* https://tracker.ceph.com/issues/61399
1133
  ior build failure
1134
* https://tracker.ceph.com/issues/57676
1135
  error during scrub thrashing: backtrace
1136
  
1137
* https://tracker.ceph.com/issues/38452
1138
  'sudo -u postgres -- pgbench -s 500 -i' failed (see the pgbench sketch at the end of this list)
1139 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1140 157 Venky Shankar
  blogbench.sh failure
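
On the pgbench entry above: the step that failed is the dataset initialization. A hedged sketch of the initialize-then-run sequence the workunit approximates (the scale factor is the one quoted above; client count, thread count and duration are illustrative):

<pre>
# Initialize a pgbench dataset at scale factor 500 (the step that failed),
# then drive a short read/write benchmark against it.
sudo -u postgres -- pgbench -s 500 -i
sudo -u postgres -- pgbench -c 8 -j 2 -T 60
</pre>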
1141
1142
h3. 18 July 2023
1143
1144
* https://tracker.ceph.com/issues/52624
1145
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1146
* https://tracker.ceph.com/issues/57676
1147
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1148
* https://tracker.ceph.com/issues/54460
1149
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1150
* https://tracker.ceph.com/issues/57655
1151
    qa: fs:mixed-clients kernel_untar_build failure
1152
* https://tracker.ceph.com/issues/51964
1153
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1154
* https://tracker.ceph.com/issues/59344
1155
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1156
* https://tracker.ceph.com/issues/61182
1157
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1158
* https://tracker.ceph.com/issues/61957
1159
    test_client_limits.TestClientLimits.test_client_release_bug
1160
* https://tracker.ceph.com/issues/59348
1161
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1162
* https://tracker.ceph.com/issues/61892
1163
    test_strays.TestStrays.test_snapshot_remove failed
1164
* https://tracker.ceph.com/issues/59346
1165
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1166
* https://tracker.ceph.com/issues/44565
1167
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1168
* https://tracker.ceph.com/issues/62067
1169
    ffsb.sh failure "Resource temporarily unavailable"
1170 156 Venky Shankar
1171
1172
h3. 17 July 2023
1173
1174
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1175
1176
* https://tracker.ceph.com/issues/61982
1177
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1178
* https://tracker.ceph.com/issues/59344
1179
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1180
* https://tracker.ceph.com/issues/61182
1181
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1182
* https://tracker.ceph.com/issues/61957
1183
    test_client_limits.TestClientLimits.test_client_release_bug
1184
* https://tracker.ceph.com/issues/61400
1185
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1186
* https://tracker.ceph.com/issues/59348
1187
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1188
* https://tracker.ceph.com/issues/61892
1189
    test_strays.TestStrays.test_snapshot_remove failed
1190
* https://tracker.ceph.com/issues/59346
1191
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1192
* https://tracker.ceph.com/issues/62036
1193
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1194
* https://tracker.ceph.com/issues/61737
1195
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1196
* https://tracker.ceph.com/issues/44565
1197
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1198 155 Rishabh Dave
1199 1 Patrick Donnelly
1200 153 Rishabh Dave
h3. 13 July 2023 Run 2
1201 152 Rishabh Dave
1202
1203
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1204
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1205
1206
* https://tracker.ceph.com/issues/61957
1207
  test_client_limits.TestClientLimits.test_client_release_bug
1208
* https://tracker.ceph.com/issues/61982
1209
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1210
* https://tracker.ceph.com/issues/59348
1211
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1212
* https://tracker.ceph.com/issues/59344
1213
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1214
* https://tracker.ceph.com/issues/54460
1215
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1216
* https://tracker.ceph.com/issues/57655
1217
  qa: fs:mixed-clients kernel_untar_build failure
1218
* https://tracker.ceph.com/issues/61400
1219
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1220
* https://tracker.ceph.com/issues/61399
1221
  ior build failure
1222
1223 151 Venky Shankar
h3. 13 July 2023
1224
1225
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1226
1227
* https://tracker.ceph.com/issues/54460
1228
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1229
* https://tracker.ceph.com/issues/61400
1230
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1231
* https://tracker.ceph.com/issues/57655
1232
    qa: fs:mixed-clients kernel_untar_build failure
1233
* https://tracker.ceph.com/issues/61945
1234
    LibCephFS.DelegTimeout failure
1235
* https://tracker.ceph.com/issues/52624
1236
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1237
* https://tracker.ceph.com/issues/57676
1238
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1239
* https://tracker.ceph.com/issues/59348
1240
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1241
* https://tracker.ceph.com/issues/59344
1242
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1243
* https://tracker.ceph.com/issues/51964
1244
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1245
* https://tracker.ceph.com/issues/59346
1246
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1247
* https://tracker.ceph.com/issues/61982
1248
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1249 150 Rishabh Dave
1250
1251
h3. 13 Jul 2023
1252
1253
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1254
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1255
1256
* https://tracker.ceph.com/issues/61957
1257
  test_client_limits.TestClientLimits.test_client_release_bug
1258
* https://tracker.ceph.com/issues/59348
1259
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1260
* https://tracker.ceph.com/issues/59346
1261
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1262
* https://tracker.ceph.com/issues/48773
1263
  scrub does not complete: reached max tries
1264
* https://tracker.ceph.com/issues/59344
1265
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1266
* https://tracker.ceph.com/issues/52438
1267
  qa: ffsb timeout
1268
* https://tracker.ceph.com/issues/57656
1269
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1270
* https://tracker.ceph.com/issues/58742
1271
  xfstests-dev: kcephfs: generic
1272
* https://tracker.ceph.com/issues/61399
1273 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1274 149 Rishabh Dave
1275 148 Rishabh Dave
h3. 12 July 2023
1276
1277
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1278
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1279
1280
* https://tracker.ceph.com/issues/61892
1281
  test_strays.TestStrays.test_snapshot_remove failed
1282
* https://tracker.ceph.com/issues/59348
1283
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1284
* https://tracker.ceph.com/issues/53859
1285
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1286
* https://tracker.ceph.com/issues/59346
1287
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1288
* https://tracker.ceph.com/issues/58742
1289
  xfstests-dev: kcephfs: generic
1290
* https://tracker.ceph.com/issues/59344
1291
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1292
* https://tracker.ceph.com/issues/52438
1293
  qa: ffsb timeout
1294
* https://tracker.ceph.com/issues/57656
1295
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1296
* https://tracker.ceph.com/issues/54460
1297
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1298
* https://tracker.ceph.com/issues/57655
1299
  qa: fs:mixed-clients kernel_untar_build failure
1300
* https://tracker.ceph.com/issues/61182
1301
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1302
* https://tracker.ceph.com/issues/61400
1303
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1304 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1305 146 Patrick Donnelly
  reached max tries: scrub does not complete
1306
1307
h3. 05 July 2023
1308
1309
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1310
1311 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1312 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1313
1314
h3. 27 Jun 2023
1315
1316
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1317 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1318
1319
* https://tracker.ceph.com/issues/59348
1320
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1321
* https://tracker.ceph.com/issues/54460
1322
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1323
* https://tracker.ceph.com/issues/59346
1324
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1325
* https://tracker.ceph.com/issues/59344
1326
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1327
* https://tracker.ceph.com/issues/61399
1328
  libmpich: undefined references to fi_strerror
1329
* https://tracker.ceph.com/issues/50223
1330
  client.xxxx isn't responding to mclientcaps(revoke)
1331 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1332
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1333 142 Venky Shankar
1334
1335
h3. 22 June 2023
1336
1337
* https://tracker.ceph.com/issues/57676
1338
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1339
* https://tracker.ceph.com/issues/54460
1340
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1341
* https://tracker.ceph.com/issues/59344
1342
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1343
* https://tracker.ceph.com/issues/59348
1344
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1345
* https://tracker.ceph.com/issues/61400
1346
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1347
* https://tracker.ceph.com/issues/57655
1348
    qa: fs:mixed-clients kernel_untar_build failure
1349
* https://tracker.ceph.com/issues/61394
1350
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1351
* https://tracker.ceph.com/issues/61762
1352
    qa: wait_for_clean: failed before timeout expired
1353
* https://tracker.ceph.com/issues/61775
1354
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1355
* https://tracker.ceph.com/issues/44565
1356
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1357
* https://tracker.ceph.com/issues/61790
1358
    cephfs client to mds comms remain silent after reconnect
1359
* https://tracker.ceph.com/issues/61791
1360
    snaptest-git-ceph.sh test timed out (job dead)
1361 139 Venky Shankar
1362
1363
h3. 20 June 2023
1364
1365
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1366
1367
* https://tracker.ceph.com/issues/57676
1368
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1369
* https://tracker.ceph.com/issues/54460
1370
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1371 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1372 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1373 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1374 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1375
* https://tracker.ceph.com/issues/59344
1376
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1377
* https://tracker.ceph.com/issues/59348
1378
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1379
* https://tracker.ceph.com/issues/57656
1380
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1381
* https://tracker.ceph.com/issues/61400
1382
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1383
* https://tracker.ceph.com/issues/57655
1384
    qa: fs:mixed-clients kernel_untar_build failure
1385
* https://tracker.ceph.com/issues/44565
1386
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1387
* https://tracker.ceph.com/issues/61737
1388 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1389
1390
h3. 16 June 2023
1391
1392 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1393 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1394 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1395 1 Patrick Donnelly
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1396
1397
1398
* https://tracker.ceph.com/issues/59344
1399
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1400 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1401
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1402 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1403
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1404
* https://tracker.ceph.com/issues/57656
1405
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1406
* https://tracker.ceph.com/issues/54460
1407
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1408 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1409
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1410 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1411
  libmpich: undefined references to fi_strerror
1412
* https://tracker.ceph.com/issues/58945
1413
  xfstests-dev: ceph-fuse: generic 
1414 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1415 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1416
1417
h3. 24 May 2023
1418
1419
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1420
1421
* https://tracker.ceph.com/issues/57676
1422
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1423
* https://tracker.ceph.com/issues/59683
1424
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1425
* https://tracker.ceph.com/issues/61399
1426
    qa: "[Makefile:299: ior] Error 1"
1427
* https://tracker.ceph.com/issues/61265
1428
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1429
* https://tracker.ceph.com/issues/59348
1430
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1431
* https://tracker.ceph.com/issues/59346
1432
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1433
* https://tracker.ceph.com/issues/61400
1434
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1435
* https://tracker.ceph.com/issues/54460
1436
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1437
* https://tracker.ceph.com/issues/51964
1438
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1439
* https://tracker.ceph.com/issues/59344
1440
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1441
* https://tracker.ceph.com/issues/61407
1442
    mds: abort on CInode::verify_dirfrags
1443
* https://tracker.ceph.com/issues/48773
1444
    qa: scrub does not complete
1445
* https://tracker.ceph.com/issues/57655
1446
    qa: fs:mixed-clients kernel_untar_build failure
1447
* https://tracker.ceph.com/issues/61409
1448 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1449
1450
h3. 15 May 2023
1451 130 Venky Shankar
1452 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1453
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1454
1455
* https://tracker.ceph.com/issues/52624
1456
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1457
* https://tracker.ceph.com/issues/54460
1458
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1459
* https://tracker.ceph.com/issues/57676
1460
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1461
* https://tracker.ceph.com/issues/59684 [kclient bug]
1462
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1463
* https://tracker.ceph.com/issues/59348
1464
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1465 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1466
    dbench test results in call trace in dmesg [kclient bug]
1467 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1468 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1469 125 Venky Shankar
1470
 
1471 129 Rishabh Dave
h3. 11 May 2023
1472
1473
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1474
1475
* https://tracker.ceph.com/issues/59684 [kclient bug]
1476
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1477
* https://tracker.ceph.com/issues/59348
1478
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1479
* https://tracker.ceph.com/issues/57655
1480
  qa: fs:mixed-clients kernel_untar_build failure
1481
* https://tracker.ceph.com/issues/57676
1482
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1483
* https://tracker.ceph.com/issues/55805
1484
  error during scrub thrashing reached max tries in 900 secs
1485
* https://tracker.ceph.com/issues/54460
1486
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1487
* https://tracker.ceph.com/issues/57656
1488
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1489
* https://tracker.ceph.com/issues/58220
1490
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1491 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1492
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1493 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1494
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source (see the untar-and-build sketch at the end of this list)
1495 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1496
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1497 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1498
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
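
On the kernel_untar_build failure above: the workunit essentially untars a Linux source tree onto the mounted filesystem and compiles it there as a client stress test. A rough, hedged approximation of the steps (mount point, tarball version and job count are illustrative):

<pre>
# Approximate shape of the workunit: unpack a kernel tarball on the
# CephFS mount and build it with a default config.
cd /mnt/cephfs
tar xJf linux-5.4.tar.xz
cd linux-5.4
make defconfig
make -j"$(nproc)"
</pre>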
1499
1500 125 Venky Shankar
h3. 11 May 2023
1501 127 Venky Shankar
1502
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1503 126 Venky Shankar
1504 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1506
 was included in the branch; however, the PR got updated and needs a retest).
1506
1507
* https://tracker.ceph.com/issues/52624
1508
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1509
* https://tracker.ceph.com/issues/54460
1510
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1511
* https://tracker.ceph.com/issues/57676
1512
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1513
* https://tracker.ceph.com/issues/59683
1514
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1515
* https://tracker.ceph.com/issues/59684 [kclient bug]
1516
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1517
* https://tracker.ceph.com/issues/59348
1518 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1519
1520
h3. 09 May 2023
1521
1522
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1523
1524
* https://tracker.ceph.com/issues/52624
1525
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1526
* https://tracker.ceph.com/issues/58340
1527
    mds: fsstress.sh hangs with multimds
1528
* https://tracker.ceph.com/issues/54460
1529
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1530
* https://tracker.ceph.com/issues/57676
1531
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1532
* https://tracker.ceph.com/issues/51964
1533
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1534
* https://tracker.ceph.com/issues/59350
1535
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1536
* https://tracker.ceph.com/issues/59683
1537
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1538
* https://tracker.ceph.com/issues/59684 [kclient bug]
1539
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1540
* https://tracker.ceph.com/issues/59348
1541 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1542
1543
h3. 10 Apr 2023
1544
1545
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1546
1547
* https://tracker.ceph.com/issues/52624
1548
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1549
* https://tracker.ceph.com/issues/58340
1550
    mds: fsstress.sh hangs with multimds
1551
* https://tracker.ceph.com/issues/54460
1552
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1553
* https://tracker.ceph.com/issues/57676
1554
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1555 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1556 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1557 121 Rishabh Dave
1558 120 Rishabh Dave
h3. 31 Mar 2023
1559 122 Rishabh Dave
1560
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1561 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1562
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1563
1564
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1565
1566
* https://tracker.ceph.com/issues/57676
1567
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1568
* https://tracker.ceph.com/issues/54460
1569
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1570
* https://tracker.ceph.com/issues/58220
1571
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1572
* https://tracker.ceph.com/issues/58220#note-9
1573
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1574
* https://tracker.ceph.com/issues/56695
1575
  Command failed (workunit test suites/pjd.sh)
1576
* https://tracker.ceph.com/issues/58564 
1577
  workunit dbench failed with error code 1
1578
* https://tracker.ceph.com/issues/57206
1579
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1580
* https://tracker.ceph.com/issues/57580
1581
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1582
* https://tracker.ceph.com/issues/58940
1583
  ceph osd hit ceph_abort
1584
* https://tracker.ceph.com/issues/55805
1585 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1586
1587
h3. 30 March 2023
1588
1589
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1590
1591
* https://tracker.ceph.com/issues/58938
1592
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1593
* https://tracker.ceph.com/issues/51964
1594
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1595
* https://tracker.ceph.com/issues/58340
1596 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1597
1598 115 Venky Shankar
h3. 29 March 2023
1599 114 Venky Shankar
1600
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1601
1602
* https://tracker.ceph.com/issues/56695
1603
    [RHEL stock] pjd test failures
1604
* https://tracker.ceph.com/issues/57676
1605
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1606
* https://tracker.ceph.com/issues/57087
1607
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1608 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1609
    mds: fsstress.sh hangs with multimds
1610 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1611
    qa: fs:mixed-clients kernel_untar_build failure
1612 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1613
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1614 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1615 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1616
1617
h3. 13 Mar 2023
1618
1619
* https://tracker.ceph.com/issues/56695
1620
    [RHEL stock] pjd test failures
1621
* https://tracker.ceph.com/issues/57676
1622
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1623
* https://tracker.ceph.com/issues/51964
1624
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1625
* https://tracker.ceph.com/issues/54460
1626
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1627
* https://tracker.ceph.com/issues/57656
1628 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1629
1630
h3. 09 Mar 2023
1631
1632
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1633
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1634
1635
* https://tracker.ceph.com/issues/56695
1636
    [RHEL stock] pjd test failures
1637
* https://tracker.ceph.com/issues/57676
1638
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1639
* https://tracker.ceph.com/issues/51964
1640
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1641
* https://tracker.ceph.com/issues/54460
1642
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1643
* https://tracker.ceph.com/issues/58340
1644
    mds: fsstress.sh hangs with multimds
1645
* https://tracker.ceph.com/issues/57087
1646 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1647
1648
h3. 07 Mar 2023
1649
1650
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1651
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1652
1653
* https://tracker.ceph.com/issues/56695
1654
    [RHEL stock] pjd test failures
1655
* https://tracker.ceph.com/issues/57676
1656
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1657
* https://tracker.ceph.com/issues/51964
1658
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1659
* https://tracker.ceph.com/issues/57656
1660
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1661
* https://tracker.ceph.com/issues/57655
1662
    qa: fs:mixed-clients kernel_untar_build failure
1663
* https://tracker.ceph.com/issues/58220
1664
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1665
* https://tracker.ceph.com/issues/54460
1666
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1667
* https://tracker.ceph.com/issues/58934
1668 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1669
1670
h3. 28 Feb 2023
1671
1672
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1673
1674
* https://tracker.ceph.com/issues/56695
1675
    [RHEL stock] pjd test failures
1676
* https://tracker.ceph.com/issues/57676
1677
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1678 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1679 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1680
1681 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs whose tests pass)
1682
1683
h3. 25 Jan 2023
1684
1685
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1686
1687
* https://tracker.ceph.com/issues/52624
1688
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1689
* https://tracker.ceph.com/issues/56695
1690
    [RHEL stock] pjd test failures
1691
* https://tracker.ceph.com/issues/57676
1692
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1693
* https://tracker.ceph.com/issues/56446
1694
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1695
* https://tracker.ceph.com/issues/57206
1696
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1697
* https://tracker.ceph.com/issues/58220
1698
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1699
* https://tracker.ceph.com/issues/58340
1700
  mds: fsstress.sh hangs with multimds
1701
* https://tracker.ceph.com/issues/56011
1702
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1703
* https://tracker.ceph.com/issues/54460
1704 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1705
1706
h3. 30 JAN 2023
1707
1708
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1709
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1710 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1711
1712 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1713
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1714
* https://tracker.ceph.com/issues/56695
1715
  [RHEL stock] pjd test failures
1716
* https://tracker.ceph.com/issues/57676
1717
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1718
* https://tracker.ceph.com/issues/55332
1719
  Failure in snaptest-git-ceph.sh
1720
* https://tracker.ceph.com/issues/51964
1721
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1722
* https://tracker.ceph.com/issues/56446
1723
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1724
* https://tracker.ceph.com/issues/57655 
1725
  qa: fs:mixed-clients kernel_untar_build failure
1726
* https://tracker.ceph.com/issues/54460
1727
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1728 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1729
  mds: fsstress.sh hangs with multimds
1730 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1731 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1732
1733
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1734 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1735
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1736 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1737 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1738
1739
h3. 15 Dec 2022
1740
1741
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1742
1743
* https://tracker.ceph.com/issues/52624
1744
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1745
* https://tracker.ceph.com/issues/56695
1746
    [RHEL stock] pjd test failures
1747
* https://tracker.ceph.com/issues/58219
1748
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1749
* https://tracker.ceph.com/issues/57655
1750
    qa: fs:mixed-clients kernel_untar_build failure
1751
* https://tracker.ceph.com/issues/57676
1752
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1753
* https://tracker.ceph.com/issues/58340
1754 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1755
1756
h3. 08 Dec 2022
1757 99 Venky Shankar
1758 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1759
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1760
1761
(lots of transient git.ceph.com failures)
1762
1763
* https://tracker.ceph.com/issues/52624
1764
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1765
* https://tracker.ceph.com/issues/56695
1766
    [RHEL stock] pjd test failures
1767
* https://tracker.ceph.com/issues/57655
1768
    qa: fs:mixed-clients kernel_untar_build failure
1769
* https://tracker.ceph.com/issues/58219
1770
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1771
* https://tracker.ceph.com/issues/58220
1772
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1773 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1774
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1775 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1776
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1777
* https://tracker.ceph.com/issues/54460
1778
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1779 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1780 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1781
1782
h3. 14 Oct 2022
1783
1784
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1785
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1786
1787
* https://tracker.ceph.com/issues/52624
1788
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1789
* https://tracker.ceph.com/issues/55804
1790
    Command failed (workunit test suites/pjd.sh)
1791
* https://tracker.ceph.com/issues/51964
1792
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1793
* https://tracker.ceph.com/issues/57682
1794
    client: ERROR: test_reconnect_after_blocklisted
1795 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1796 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1797
1798
h3. 10 Oct 2022
1799 92 Rishabh Dave
1800 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1801
1802
reruns
1803
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1804 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1805 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1806 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1807 91 Rishabh Dave
1808
known bugs
1809
* https://tracker.ceph.com/issues/52624
1810
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1811
* https://tracker.ceph.com/issues/50223
1812
  client.xxxx isn't responding to mclientcaps(revoke)
1813
* https://tracker.ceph.com/issues/57299
1814
  qa: test_dump_loads fails with JSONDecodeError
1815
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1816
  qa: fs:mixed-clients kernel_untar_build failure
1817
* https://tracker.ceph.com/issues/57206
1818 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1819
1820
h3. 2022 Sep 29
1821
1822
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1823
1824
* https://tracker.ceph.com/issues/55804
1825
  Command failed (workunit test suites/pjd.sh)
1826
* https://tracker.ceph.com/issues/36593
1827
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1828
* https://tracker.ceph.com/issues/52624
1829
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1830
* https://tracker.ceph.com/issues/51964
1831
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1832
* https://tracker.ceph.com/issues/56632
1833
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1834
* https://tracker.ceph.com/issues/50821
1835 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1836
1837
h3. 2022 Sep 26
1838
1839
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1840
1841
* https://tracker.ceph.com/issues/55804
1842
    qa failure: pjd link tests failed
1843
* https://tracker.ceph.com/issues/57676
1844
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1845
* https://tracker.ceph.com/issues/52624
1846
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1847
* https://tracker.ceph.com/issues/57580
1848
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1849
* https://tracker.ceph.com/issues/48773
1850
    qa: scrub does not complete
1851
* https://tracker.ceph.com/issues/57299
1852
    qa: test_dump_loads fails with JSONDecodeError
1853
* https://tracker.ceph.com/issues/57280
1854
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1855
* https://tracker.ceph.com/issues/57205
1856
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1857
* https://tracker.ceph.com/issues/57656
1858
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1859
* https://tracker.ceph.com/issues/57677
1860
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1861
* https://tracker.ceph.com/issues/57206
1862
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1863
* https://tracker.ceph.com/issues/57446
1864
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1865 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1866
    qa: fs:mixed-clients kernel_untar_build failure
1867 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1868
    client: ERROR: test_reconnect_after_blocklisted
1869 87 Patrick Donnelly
1870
1871
h3. 2022 Sep 22
1872
1873
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1874
1875
* https://tracker.ceph.com/issues/57299
1876
    qa: test_dump_loads fails with JSONDecodeError
1877
* https://tracker.ceph.com/issues/57205
1878
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1879
* https://tracker.ceph.com/issues/52624
1880
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1881
* https://tracker.ceph.com/issues/57580
1882
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1883
* https://tracker.ceph.com/issues/57280
1884
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1885
* https://tracker.ceph.com/issues/48773
1886
    qa: scrub does not complete
1887
* https://tracker.ceph.com/issues/56446
1888
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1889
* https://tracker.ceph.com/issues/57206
1890
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1891
* https://tracker.ceph.com/issues/51267
1892
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1893
1894
NEW:
1895
1896
* https://tracker.ceph.com/issues/57656
1897
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1898
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1899
    qa: fs:mixed-clients kernel_untar_build failure
1900
* https://tracker.ceph.com/issues/57657
1901
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1902
1903
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1904 80 Venky Shankar
1905 79 Venky Shankar
1906
h3. 2022 Sep 16
1907
1908
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1909
1910
* https://tracker.ceph.com/issues/57446
1911
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1912
* https://tracker.ceph.com/issues/57299
1913
    qa: test_dump_loads fails with JSONDecodeError
1914
* https://tracker.ceph.com/issues/50223
1915
    client.xxxx isn't responding to mclientcaps(revoke)
1916
* https://tracker.ceph.com/issues/52624
1917
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1918
* https://tracker.ceph.com/issues/57205
1919
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1920
* https://tracker.ceph.com/issues/57280
1921
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1922
* https://tracker.ceph.com/issues/51282
1923
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1924
* https://tracker.ceph.com/issues/48203
1925
  https://tracker.ceph.com/issues/36593
1926
    qa: quota failure
1927
    qa: quota failure caused by clients stepping on each other
1928
* https://tracker.ceph.com/issues/57580
1929 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1930
1931 76 Rishabh Dave
1932
h3. 2022 Aug 26
1933
1934
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1935
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1936
1937
* https://tracker.ceph.com/issues/57206
1938
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1939
* https://tracker.ceph.com/issues/56632
1940
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1941
* https://tracker.ceph.com/issues/56446
1942
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1943
* https://tracker.ceph.com/issues/51964
1944
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1945
* https://tracker.ceph.com/issues/53859
1946
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1947
1948
* https://tracker.ceph.com/issues/54460
1949
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1950
* https://tracker.ceph.com/issues/54462
1951
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1952
1954
* https://tracker.ceph.com/issues/36593
1955
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1956
1957
* https://tracker.ceph.com/issues/52624
1958
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1959
* https://tracker.ceph.com/issues/55804
1960
  Command failed (workunit test suites/pjd.sh)
1961
* https://tracker.ceph.com/issues/50223
1962
  client.xxxx isn't responding to mclientcaps(revoke)
1963 75 Venky Shankar
1964
1965
h3. 2022 Aug 22
1966
1967
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1968
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1969
1970
* https://tracker.ceph.com/issues/52624
1971
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1972
* https://tracker.ceph.com/issues/56446
1973
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1974
* https://tracker.ceph.com/issues/55804
1975
    Command failed (workunit test suites/pjd.sh)
1976
* https://tracker.ceph.com/issues/51278
1977
    mds: "FAILED ceph_assert(!segments.empty())"
1978
* https://tracker.ceph.com/issues/54460
1979
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1980
* https://tracker.ceph.com/issues/57205
1981
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1982
* https://tracker.ceph.com/issues/57206
1983
    ceph_test_libcephfs_reclaim crashes during test
1984
* https://tracker.ceph.com/issues/53859
1985
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1986
* https://tracker.ceph.com/issues/50223
1987 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1988
1989
h3. 2022 Aug 12
1990
1991
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1992
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1993
1994
* https://tracker.ceph.com/issues/52624
1995
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1996
* https://tracker.ceph.com/issues/56446
1997
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1998
* https://tracker.ceph.com/issues/51964
1999
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2000
* https://tracker.ceph.com/issues/55804
2001
    Command failed (workunit test suites/pjd.sh)
2002
* https://tracker.ceph.com/issues/50223
2003
    client.xxxx isn't responding to mclientcaps(revoke)
2004
* https://tracker.ceph.com/issues/50821
2005 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
2006 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2007 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2008
2009
h3. 2022 Aug 04
2010
2011
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2012
2013 69 Rishabh Dave
Unrealted teuthology failure on rhel
2014 68 Rishabh Dave
2015
h3. 2022 Jul 25
2016
2017
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2018
2019 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2020
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2021 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2022
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2023
2024
* https://tracker.ceph.com/issues/55804
2025
  Command failed (workunit test suites/pjd.sh)
2026
* https://tracker.ceph.com/issues/50223
2027
  client.xxxx isn't responding to mclientcaps(revoke)
2028
2029
* https://tracker.ceph.com/issues/54460
2030
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2031 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2032 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2033 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2034 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2035
2036
h3. 2022 July 22
2037
2038
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2039
2040
MDS_HEALTH_DUMMY error in log fixed by followup commit.
2041
transient selinux ping failure
2042
2043
* https://tracker.ceph.com/issues/56694
2044
    qa: avoid blocking forever on hung umount
2045
* https://tracker.ceph.com/issues/56695
2046
    [RHEL stock] pjd test failures
2047
* https://tracker.ceph.com/issues/56696
2048
    admin keyring disappears during qa run
2049
* https://tracker.ceph.com/issues/56697
2050
    qa: fs/snaps fails for fuse
2051
* https://tracker.ceph.com/issues/50222
2052
    osd: 5.2s0 deep-scrub : stat mismatch
2053
* https://tracker.ceph.com/issues/56698
2054
    client: FAILED ceph_assert(_size == 0)
2055
* https://tracker.ceph.com/issues/50223
2056
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2057 66 Rishabh Dave
2058 65 Rishabh Dave
2059
h3. 2022 Jul 15
2060
2061
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2062
2063
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2064
2065
* https://tracker.ceph.com/issues/53859
2066
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2067
* https://tracker.ceph.com/issues/55804
2068
  Command failed (workunit test suites/pjd.sh)
2069
* https://tracker.ceph.com/issues/50223
2070
  client.xxxx isn't responding to mclientcaps(revoke)
2071
* https://tracker.ceph.com/issues/50222
2072
  osd: deep-scrub : stat mismatch
2073
2074
* https://tracker.ceph.com/issues/56632
2075
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2076
* https://tracker.ceph.com/issues/56634
2077
  workunit test fs/snaps/snaptest-intodir.sh
2078
* https://tracker.ceph.com/issues/56644
2079
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2080
2081 61 Rishabh Dave
2082
2083
h3. 2022 July 05
2084 62 Rishabh Dave
2085 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2086
2087
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2088
2089
On 2nd re-run only a few jobs failed -
2090 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2091
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2092
2093
* https://tracker.ceph.com/issues/56446
2094
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2095
* https://tracker.ceph.com/issues/55804
2096
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2097
2098
* https://tracker.ceph.com/issues/56445
2099 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2100
* https://tracker.ceph.com/issues/51267
2101
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2102 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2103
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2104 61 Rishabh Dave
2105 58 Venky Shankar
2106
2107
h3. 2022 July 04
2108
2109
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2110
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2111
2112
* https://tracker.ceph.com/issues/56445
2113 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2114
* https://tracker.ceph.com/issues/56446
2115
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2116
* https://tracker.ceph.com/issues/51964
2117 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2118 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2119 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2120
2121
h3. 2022 June 20
2122
2123
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2124
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2125
2126
* https://tracker.ceph.com/issues/52624
2127
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2128
* https://tracker.ceph.com/issues/55804
2129
    qa failure: pjd link tests failed
2130
* https://tracker.ceph.com/issues/54108
2131
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2132
* https://tracker.ceph.com/issues/55332
2133 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2134
2135
h3. 2022 June 13
2136
2137
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2138
2139
* https://tracker.ceph.com/issues/56024
2140
    cephadm: removes ceph.conf during qa run causing command failure
2141
* https://tracker.ceph.com/issues/48773
2142
    qa: scrub does not complete
2143
* https://tracker.ceph.com/issues/56012
2144
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2145 55 Venky Shankar
2146 54 Venky Shankar
2147
h3. 2022 Jun 13
2148
2149
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2150
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2151
2152
* https://tracker.ceph.com/issues/52624
2153
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2154
* https://tracker.ceph.com/issues/51964
2155
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2156
* https://tracker.ceph.com/issues/53859
2157
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2158
* https://tracker.ceph.com/issues/55804
2159
    qa failure: pjd link tests failed
2160
* https://tracker.ceph.com/issues/56003
2161
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2162
* https://tracker.ceph.com/issues/56011
2163
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2164
* https://tracker.ceph.com/issues/56012
2165 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2166
2167
h3. 2022 Jun 07
2168
2169
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2170
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2171
2172
* https://tracker.ceph.com/issues/52624
2173
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2174
* https://tracker.ceph.com/issues/50223
2175
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2176
* https://tracker.ceph.com/issues/50224
2177 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2178
2179
h3. 2022 May 12
2180 52 Venky Shankar
2181 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2182
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2183
2184
* https://tracker.ceph.com/issues/52624
2185
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2186
* https://tracker.ceph.com/issues/50223
2187
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2188
* https://tracker.ceph.com/issues/55332
2189
    Failure in snaptest-git-ceph.sh
2190
* https://tracker.ceph.com/issues/53859
2191 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2192 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2193
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2194 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2195 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2196
2197 50 Venky Shankar
h3. 2022 May 04
2198
2199
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2200 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2201
2202
* https://tracker.ceph.com/issues/52624
2203
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2204
* https://tracker.ceph.com/issues/50223
2205
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2206
* https://tracker.ceph.com/issues/55332
2207
    Failure in snaptest-git-ceph.sh
2208
* https://tracker.ceph.com/issues/53859
2209
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2210
* https://tracker.ceph.com/issues/55516
2211
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2212
* https://tracker.ceph.com/issues/55537
2213
    mds: crash during fs:upgrade test
2214
* https://tracker.ceph.com/issues/55538
2215 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2216
2217
h3. 2022 Apr 25
2218
2219
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2220
2221
* https://tracker.ceph.com/issues/52624
2222
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2223
* https://tracker.ceph.com/issues/50223
2224
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2225
* https://tracker.ceph.com/issues/55258
2226
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2227
* https://tracker.ceph.com/issues/55377
2228 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2229
2230
h3. 2022 Apr 14
2231
2232
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2233
2234
* https://tracker.ceph.com/issues/52624
2235
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2236
* https://tracker.ceph.com/issues/50223
2237
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2238
* https://tracker.ceph.com/issues/52438
2239
    qa: ffsb timeout
2240
* https://tracker.ceph.com/issues/55170
2241
    mds: crash during rejoin (CDir::fetch_keys)
2242
* https://tracker.ceph.com/issues/55331
2243
    pjd failure
2244
* https://tracker.ceph.com/issues/48773
2245
    qa: scrub does not complete
2246
* https://tracker.ceph.com/issues/55332
2247
    Failure in snaptest-git-ceph.sh
2248
* https://tracker.ceph.com/issues/55258
2249 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2250
2251 46 Venky Shankar
h3. 2022 Apr 11
2252 45 Venky Shankar
2253
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2254
2255
* https://tracker.ceph.com/issues/48773
2256
    qa: scrub does not complete
2257
* https://tracker.ceph.com/issues/52624
2258
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2259
* https://tracker.ceph.com/issues/52438
2260
    qa: ffsb timeout
2261
* https://tracker.ceph.com/issues/48680
2262
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2263
* https://tracker.ceph.com/issues/55236
2264
    qa: fs/snaps tests fail with "hit max job timeout"
2265
* https://tracker.ceph.com/issues/54108
2266
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2267
* https://tracker.ceph.com/issues/54971
2268
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2269
* https://tracker.ceph.com/issues/50223
2270
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2271
* https://tracker.ceph.com/issues/55258
2272 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2273 42 Venky Shankar
2274 43 Venky Shankar
h3. 2022 Mar 21
2275
2276
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2277
2278
Run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, merging only unrelated PRs that pass tests.
2279
2280
2281 42 Venky Shankar
h3. 2022 Mar 08
2282
2283
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2284
2285
rerun with
2286
- (drop) https://github.com/ceph/ceph/pull/44679
2287
- (drop) https://github.com/ceph/ceph/pull/44958
2288
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2289
2290
* https://tracker.ceph.com/issues/54419 (new)
2291
    `ceph orch upgrade start` seems to never reach completion
2292
* https://tracker.ceph.com/issues/51964
2293
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2294
* https://tracker.ceph.com/issues/52624
2295
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2296
* https://tracker.ceph.com/issues/50223
2297
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2298
* https://tracker.ceph.com/issues/52438
2299
    qa: ffsb timeout
2300
* https://tracker.ceph.com/issues/50821
2301
    qa: untar_snap_rm failure during mds thrashing
2302 41 Venky Shankar
2303
2304
h3. 2022 Feb 09
2305
2306
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2307
2308
rerun with
2309
- (drop) https://github.com/ceph/ceph/pull/37938
2310
- (drop) https://github.com/ceph/ceph/pull/44335
2311
- (drop) https://github.com/ceph/ceph/pull/44491
2312
- (drop) https://github.com/ceph/ceph/pull/44501
2313
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2314
2315
* https://tracker.ceph.com/issues/51964
2316
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2317
* https://tracker.ceph.com/issues/54066
2318
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2319
* https://tracker.ceph.com/issues/48773
2320
    qa: scrub does not complete
2321
* https://tracker.ceph.com/issues/52624
2322
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2323
* https://tracker.ceph.com/issues/50223
2324
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2325
* https://tracker.ceph.com/issues/52438
2326 40 Patrick Donnelly
    qa: ffsb timeout
2327
2328
h3. 2022 Feb 01
2329
2330
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2331
2332
* https://tracker.ceph.com/issues/54107
2333
    kclient: hang during umount
2334
* https://tracker.ceph.com/issues/54106
2335
    kclient: hang during workunit cleanup
2336
* https://tracker.ceph.com/issues/54108
2337
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2338
* https://tracker.ceph.com/issues/48773
2339
    qa: scrub does not complete
2340
* https://tracker.ceph.com/issues/52438
2341
    qa: ffsb timeout
2342 36 Venky Shankar
2343
2344
h3. 2022 Jan 13
2345 39 Venky Shankar
2346 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2347 38 Venky Shankar
2348
rerun with:
2349 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2350
- (drop) https://github.com/ceph/ceph/pull/43184
2351
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2352
2353
* https://tracker.ceph.com/issues/50223
2354
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2355
* https://tracker.ceph.com/issues/51282
2356
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2357
* https://tracker.ceph.com/issues/48773
2358
    qa: scrub does not complete
2359
* https://tracker.ceph.com/issues/52624
2360
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2361
* https://tracker.ceph.com/issues/53859
2362 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2363
2364
h3. 2022 Jan 03
2365
2366
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2367
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2368
2369
* https://tracker.ceph.com/issues/50223
2370
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2371
* https://tracker.ceph.com/issues/51964
2372
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2373
* https://tracker.ceph.com/issues/51267
2374
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2375
* https://tracker.ceph.com/issues/51282
2376
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2377
* https://tracker.ceph.com/issues/50821
2378
    qa: untar_snap_rm failure during mds thrashing
2379 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2380
    mds: "FAILED ceph_assert(!segments.empty())"
2381
* https://tracker.ceph.com/issues/52279
2382 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2383 33 Patrick Donnelly
2384
2385
h3. 2021 Dec 22
2386
2387
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2388
2389
* https://tracker.ceph.com/issues/52624
2390
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2391
* https://tracker.ceph.com/issues/50223
2392
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2393
* https://tracker.ceph.com/issues/52279
2394
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2395
* https://tracker.ceph.com/issues/50224
2396
    qa: test_mirroring_init_failure_with_recovery failure
2397
* https://tracker.ceph.com/issues/48773
2398
    qa: scrub does not complete
2399 32 Venky Shankar
2400
2401
h3. 2021 Nov 30
2402
2403
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2404
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2405
2406
* https://tracker.ceph.com/issues/53436
2407
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2408
* https://tracker.ceph.com/issues/51964
2409
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2410
* https://tracker.ceph.com/issues/48812
2411
    qa: test_scrub_pause_and_resume_with_abort failure
2412
* https://tracker.ceph.com/issues/51076
2413
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2414
* https://tracker.ceph.com/issues/50223
2415
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2416
* https://tracker.ceph.com/issues/52624
2417
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2418
* https://tracker.ceph.com/issues/50250
2419
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2420 31 Patrick Donnelly
2421
2422
h3. 2021 November 9
2423
2424
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2425
2426
* https://tracker.ceph.com/issues/53214
2427
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2428
* https://tracker.ceph.com/issues/48773
2429
    qa: scrub does not complete
2430
* https://tracker.ceph.com/issues/50223
2431
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2432
* https://tracker.ceph.com/issues/51282
2433
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2434
* https://tracker.ceph.com/issues/52624
2435
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2436
* https://tracker.ceph.com/issues/53216
2437
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2438
* https://tracker.ceph.com/issues/50250
2439
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2440
2441 30 Patrick Donnelly
2442
2443
h3. 2021 November 03
2444
2445
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2446
2447
* https://tracker.ceph.com/issues/51964
2448
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2449
* https://tracker.ceph.com/issues/51282
2450
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2451
* https://tracker.ceph.com/issues/52436
2452
    fs/ceph: "corrupt mdsmap"
2453
* https://tracker.ceph.com/issues/53074
2454
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2455
* https://tracker.ceph.com/issues/53150
2456
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2457
* https://tracker.ceph.com/issues/53155
2458
    MDSMonitor: assertion during upgrade to v16.2.5+
2459 29 Patrick Donnelly
2460
2461
h3. 2021 October 26
2462
2463
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2464
2465
* https://tracker.ceph.com/issues/53074
2466
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2467
* https://tracker.ceph.com/issues/52997
2468
    testing: hanging umount
2469
* https://tracker.ceph.com/issues/50824
2470
    qa: snaptest-git-ceph bus error
2471
* https://tracker.ceph.com/issues/52436
2472
    fs/ceph: "corrupt mdsmap"
2473
* https://tracker.ceph.com/issues/48773
2474
    qa: scrub does not complete
2475
* https://tracker.ceph.com/issues/53082
2476
    ceph-fuse: segmentation fault in Client::handle_mds_map
2477
* https://tracker.ceph.com/issues/50223
2478
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2479
* https://tracker.ceph.com/issues/52624
2480
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2481
* https://tracker.ceph.com/issues/50224
2482
    qa: test_mirroring_init_failure_with_recovery failure
2483
* https://tracker.ceph.com/issues/50821
2484
    qa: untar_snap_rm failure during mds thrashing
2485
* https://tracker.ceph.com/issues/50250
2486
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2487
2488 27 Patrick Donnelly
2489
2490 28 Patrick Donnelly
h3. 2021 October 19
2491 27 Patrick Donnelly
2492
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2493
2494
* https://tracker.ceph.com/issues/52995
2495
    qa: test_standby_count_wanted failure
2496
* https://tracker.ceph.com/issues/52948
2497
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2498
* https://tracker.ceph.com/issues/52996
2499
    qa: test_perf_counters via test_openfiletable
2500
* https://tracker.ceph.com/issues/48772
2501
    qa: pjd: not ok 9, 44, 80
2502
* https://tracker.ceph.com/issues/52997
2503
    testing: hanging umount
2504
* https://tracker.ceph.com/issues/50250
2505
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2506
* https://tracker.ceph.com/issues/52624
2507
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2508
* https://tracker.ceph.com/issues/50223
2509
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2510
* https://tracker.ceph.com/issues/50821
2511
    qa: untar_snap_rm failure during mds thrashing
2512
* https://tracker.ceph.com/issues/48773
2513
    qa: scrub does not complete
2514 26 Patrick Donnelly
2515
2516
h3. 2021 October 12
2517
2518
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2519
2520
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2521
2522
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2523
2524
2525
* https://tracker.ceph.com/issues/51282
2526
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2527
* https://tracker.ceph.com/issues/52948
2528
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2529
* https://tracker.ceph.com/issues/48773
2530
    qa: scrub does not complete
2531
* https://tracker.ceph.com/issues/50224
2532
    qa: test_mirroring_init_failure_with_recovery failure
2533
* https://tracker.ceph.com/issues/52949
2534
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2535 25 Patrick Donnelly
2536 23 Patrick Donnelly
2537 24 Patrick Donnelly
h3. 2021 October 02
2538
2539
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2540
2541
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2542
2543
test_simple failures caused by PR in this set.
2544
2545
A few reruns because of QA infra noise.
2546
2547
* https://tracker.ceph.com/issues/52822
2548
    qa: failed pacific install on fs:upgrade
2549
* https://tracker.ceph.com/issues/52624
2550
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2551
* https://tracker.ceph.com/issues/50223
2552
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2553
* https://tracker.ceph.com/issues/48773
2554
    qa: scrub does not complete
2555
2556
2557 23 Patrick Donnelly
h3. 2021 September 20
2558
2559
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2560
2561
* https://tracker.ceph.com/issues/52677
2562
    qa: test_simple failure
2563
* https://tracker.ceph.com/issues/51279
2564
    kclient hangs on umount (testing branch)
2565
* https://tracker.ceph.com/issues/50223
2566
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2567
* https://tracker.ceph.com/issues/50250
2568
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2569
* https://tracker.ceph.com/issues/52624
2570
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2571
* https://tracker.ceph.com/issues/52438
2572
    qa: ffsb timeout
2573 22 Patrick Donnelly
2574
2575
h3. 2021 September 10
2576
2577
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2578
2579
* https://tracker.ceph.com/issues/50223
2580
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2581
* https://tracker.ceph.com/issues/50250
2582
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2583
* https://tracker.ceph.com/issues/52624
2584
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2585
* https://tracker.ceph.com/issues/52625
2586
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2587
* https://tracker.ceph.com/issues/52439
2588
    qa: acls does not compile on centos stream
2589
* https://tracker.ceph.com/issues/50821
2590
    qa: untar_snap_rm failure during mds thrashing
2591
* https://tracker.ceph.com/issues/48773
2592
    qa: scrub does not complete
2593
* https://tracker.ceph.com/issues/52626
2594
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2595
* https://tracker.ceph.com/issues/51279
2596
    kclient hangs on umount (testing branch)
2597 21 Patrick Donnelly
2598
2599
h3. 2021 August 27
2600
2601
Several jobs died because of device failures.
2602
2603
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2604
2605
* https://tracker.ceph.com/issues/52430
2606
    mds: fast async create client mount breaks racy test
2607
* https://tracker.ceph.com/issues/52436
2608
    fs/ceph: "corrupt mdsmap"
2609
* https://tracker.ceph.com/issues/52437
2610
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2611
* https://tracker.ceph.com/issues/51282
2612
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2613
* https://tracker.ceph.com/issues/52438
2614
    qa: ffsb timeout
2615
* https://tracker.ceph.com/issues/52439
2616
    qa: acls does not compile on centos stream
2617 20 Patrick Donnelly
2618
2619
h3. 2021 July 30
2620
2621
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2622
2623
* https://tracker.ceph.com/issues/50250
2624
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2625
* https://tracker.ceph.com/issues/51282
2626
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2627
* https://tracker.ceph.com/issues/48773
2628
    qa: scrub does not complete
2629
* https://tracker.ceph.com/issues/51975
2630
    pybind/mgr/stats: KeyError
2631 19 Patrick Donnelly
2632
2633
h3. 2021 July 28
2634
2635
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2636
2637
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2638
2639
* https://tracker.ceph.com/issues/51905
2640
    qa: "error reading sessionmap 'mds1_sessionmap'"
2641
* https://tracker.ceph.com/issues/48773
2642
    qa: scrub does not complete
2643
* https://tracker.ceph.com/issues/50250
2644
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2645
* https://tracker.ceph.com/issues/51267
2646
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2647
* https://tracker.ceph.com/issues/51279
2648
    kclient hangs on umount (testing branch)
2649 18 Patrick Donnelly
2650
2651
h3. 2021 July 16
2652
2653
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2654
2655
* https://tracker.ceph.com/issues/48773
2656
    qa: scrub does not complete
2657
* https://tracker.ceph.com/issues/48772
2658
    qa: pjd: not ok 9, 44, 80
2659
* https://tracker.ceph.com/issues/45434
2660
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2661
* https://tracker.ceph.com/issues/51279
2662
    kclient hangs on umount (testing branch)
2663
* https://tracker.ceph.com/issues/50824
2664
    qa: snaptest-git-ceph bus error
2665 17 Patrick Donnelly
2666
2667
h3. 2021 July 04
2668
2669
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2670
2671
* https://tracker.ceph.com/issues/48773
2672
    qa: scrub does not complete
2673
* https://tracker.ceph.com/issues/39150
2674
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2675
* https://tracker.ceph.com/issues/45434
2676
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2677
* https://tracker.ceph.com/issues/51282
2678
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2679
* https://tracker.ceph.com/issues/48771
2680
    qa: iogen: workload fails to cause balancing
2681
* https://tracker.ceph.com/issues/51279
2682
    kclient hangs on umount (testing branch)
2683
* https://tracker.ceph.com/issues/50250
2684
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2685 16 Patrick Donnelly
2686
2687
h3. 2021 July 01
2688
2689
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2690
2691
* https://tracker.ceph.com/issues/51197
2692
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2693
* https://tracker.ceph.com/issues/50866
2694
    osd: stat mismatch on objects
2695
* https://tracker.ceph.com/issues/48773
2696
    qa: scrub does not complete
2697 15 Patrick Donnelly
2698
2699
h3. 2021 June 26
2700
2701
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2702
2703
* https://tracker.ceph.com/issues/51183
2704
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2705
* https://tracker.ceph.com/issues/51410
2706
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2707
* https://tracker.ceph.com/issues/48773
2708
    qa: scrub does not complete
2709
* https://tracker.ceph.com/issues/51282
2710
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2711
* https://tracker.ceph.com/issues/51169
2712
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2713
* https://tracker.ceph.com/issues/48772
2714
    qa: pjd: not ok 9, 44, 80
2715 14 Patrick Donnelly
2716
2717
h3. 2021 June 21
2718
2719
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2720
2721
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2722
2723
* https://tracker.ceph.com/issues/51282
2724
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2725
* https://tracker.ceph.com/issues/51183
2726
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2727
* https://tracker.ceph.com/issues/48773
2728
    qa: scrub does not complete
2729
* https://tracker.ceph.com/issues/48771
2730
    qa: iogen: workload fails to cause balancing
2731
* https://tracker.ceph.com/issues/51169
2732
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2733
* https://tracker.ceph.com/issues/50495
2734
    libcephfs: shutdown race fails with status 141
2735
* https://tracker.ceph.com/issues/45434
2736
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2737
* https://tracker.ceph.com/issues/50824
2738
    qa: snaptest-git-ceph bus error
2739
* https://tracker.ceph.com/issues/50223
2740
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2741 13 Patrick Donnelly
2742
2743
h3. 2021 June 16
2744
2745
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2746
2747
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2748
2749
* https://tracker.ceph.com/issues/45434
2750
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2751
* https://tracker.ceph.com/issues/51169
2752
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2753
* https://tracker.ceph.com/issues/43216
2754
    MDSMonitor: removes MDS coming out of quorum election
2755
* https://tracker.ceph.com/issues/51278
2756
    mds: "FAILED ceph_assert(!segments.empty())"
2757
* https://tracker.ceph.com/issues/51279
2758
    kclient hangs on umount (testing branch)
2759
* https://tracker.ceph.com/issues/51280
2760
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2761
* https://tracker.ceph.com/issues/51183
2762
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2763
* https://tracker.ceph.com/issues/51281
2764
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2765
* https://tracker.ceph.com/issues/48773
2766
    qa: scrub does not complete
2767
* https://tracker.ceph.com/issues/51076
2768
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2769
* https://tracker.ceph.com/issues/51228
2770
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2771
* https://tracker.ceph.com/issues/51282
2772
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2773 12 Patrick Donnelly
2774
2775
h3. 2021 June 14
2776
2777
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2778
2779
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2780
2781
* https://tracker.ceph.com/issues/51169
2782
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2783
* https://tracker.ceph.com/issues/51228
2784
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2785
* https://tracker.ceph.com/issues/48773
2786
    qa: scrub does not complete
2787
* https://tracker.ceph.com/issues/51183
2788
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2789
* https://tracker.ceph.com/issues/45434
2790
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2791
* https://tracker.ceph.com/issues/51182
2792
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2793
* https://tracker.ceph.com/issues/51229
2794
    qa: test_multi_snap_schedule list difference failure
2795
* https://tracker.ceph.com/issues/50821
2796
    qa: untar_snap_rm failure during mds thrashing
2797 11 Patrick Donnelly
2798
2799
h3. 2021 June 13
2800
2801
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2802
2803
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2804
2805
* https://tracker.ceph.com/issues/51169
2806
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2807
* https://tracker.ceph.com/issues/48773
2808
    qa: scrub does not complete
2809
* https://tracker.ceph.com/issues/51182
2810
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2811
* https://tracker.ceph.com/issues/51183
2812
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2813
* https://tracker.ceph.com/issues/51197
2814
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2815
* https://tracker.ceph.com/issues/45434
2816 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2817
2818
h3. 2021 June 11
2819
2820
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2821
2822
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2823
2824
* https://tracker.ceph.com/issues/51169
2825
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2826
* https://tracker.ceph.com/issues/45434
2827
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2828
* https://tracker.ceph.com/issues/48771
2829
    qa: iogen: workload fails to cause balancing
2830
* https://tracker.ceph.com/issues/43216
2831
    MDSMonitor: removes MDS coming out of quorum election
2832
* https://tracker.ceph.com/issues/51182
2833
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2834
* https://tracker.ceph.com/issues/50223
2835
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2836
* https://tracker.ceph.com/issues/48773
2837
    qa: scrub does not complete
2838
* https://tracker.ceph.com/issues/51183
2839
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2840
* https://tracker.ceph.com/issues/51184
2841
    qa: fs:bugs does not specify distro
2842 9 Patrick Donnelly
2843
2844
h3. 2021 June 03
2845
2846
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2847
2848
* https://tracker.ceph.com/issues/45434
2849
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2850
* https://tracker.ceph.com/issues/50016
2851
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2852
* https://tracker.ceph.com/issues/50821
2853
    qa: untar_snap_rm failure during mds thrashing
2854
* https://tracker.ceph.com/issues/50622 (regression)
2855
    msg: active_connections regression
2856
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2857
    qa: failed umount in test_volumes
2858
* https://tracker.ceph.com/issues/48773
2859
    qa: scrub does not complete
2860
* https://tracker.ceph.com/issues/43216
2861
    MDSMonitor: removes MDS coming out of quorum election
2862 7 Patrick Donnelly
2863
2864 8 Patrick Donnelly
h3. 2021 May 18
2865
2866
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2867
2868
Regression in testing kernel caused some failures. Ilya fixed those and the rerun
2869
looked better. Some odd new noise in the rerun relating to packaging and "No
2870
module named 'tasks.ceph'".
2871
2872
* https://tracker.ceph.com/issues/50824
2873
    qa: snaptest-git-ceph bus error
2874
* https://tracker.ceph.com/issues/50622 (regression)
2875
    msg: active_connections regression
2876
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2877
    qa: failed umount in test_volumes
2878
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2879
    qa: quota failure
2880
2881
2882 7 Patrick Donnelly
h3. 2021 May 18
2883
2884
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2885
2886
* https://tracker.ceph.com/issues/50821
2887
    qa: untar_snap_rm failure during mds thrashing
2888
* https://tracker.ceph.com/issues/48773
2889
    qa: scrub does not complete
2890
* https://tracker.ceph.com/issues/45591
2891
    mgr: FAILED ceph_assert(daemon != nullptr)
2892
* https://tracker.ceph.com/issues/50866
2893
    osd: stat mismatch on objects
2894
* https://tracker.ceph.com/issues/50016
2895
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2896
* https://tracker.ceph.com/issues/50867
2897
    qa: fs:mirror: reduced data availability
2898
* https://tracker.ceph.com/issues/50821
2899
    qa: untar_snap_rm failure during mds thrashing
2900
* https://tracker.ceph.com/issues/50622 (regression)
2901
    msg: active_connections regression
2902
* https://tracker.ceph.com/issues/50223
2903
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2904
* https://tracker.ceph.com/issues/50868
2905
    qa: "kern.log.gz already exists; not overwritten"
2906
* https://tracker.ceph.com/issues/50870
2907
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2908 6 Patrick Donnelly
2909
2910
h3. 2021 May 11
2911
2912
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2913
2914
* One class of failures caused by a PR
2915
* https://tracker.ceph.com/issues/48812
2916
    qa: test_scrub_pause_and_resume_with_abort failure
2917
* https://tracker.ceph.com/issues/50390
2918
    mds: monclient: wait_auth_rotating timed out after 30
2919
* https://tracker.ceph.com/issues/48773
2920
    qa: scrub does not complete
2921
* https://tracker.ceph.com/issues/50821
2922
    qa: untar_snap_rm failure during mds thrashing
2923
* https://tracker.ceph.com/issues/50224
2924
    qa: test_mirroring_init_failure_with_recovery failure
2925
* https://tracker.ceph.com/issues/50622 (regression)
2926
    msg: active_connections regression
2927
* https://tracker.ceph.com/issues/50825
2928
    qa: snaptest-git-ceph hang during mon thrashing v2
2929
* https://tracker.ceph.com/issues/50821
2930
    qa: untar_snap_rm failure during mds thrashing
2931
* https://tracker.ceph.com/issues/50823
2932
    qa: RuntimeError: timeout waiting for cluster to stabilize
2933 5 Patrick Donnelly
2934
2935
h3. 2021 May 14
2936
2937
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2938
2939
* https://tracker.ceph.com/issues/48812
2940
    qa: test_scrub_pause_and_resume_with_abort failure
2941
* https://tracker.ceph.com/issues/50821
2942
    qa: untar_snap_rm failure during mds thrashing
2943
* https://tracker.ceph.com/issues/50622 (regression)
2944
    msg: active_connections regression
2945
* https://tracker.ceph.com/issues/50822
2946
    qa: testing kernel patch for client metrics causes mds abort
2947
* https://tracker.ceph.com/issues/48773
2948
    qa: scrub does not complete
2949
* https://tracker.ceph.com/issues/50823
2950
    qa: RuntimeError: timeout waiting for cluster to stabilize
2951
* https://tracker.ceph.com/issues/50824
2952
    qa: snaptest-git-ceph bus error
2953
* https://tracker.ceph.com/issues/50825
2954
    qa: snaptest-git-ceph hang during mon thrashing v2
2955
* https://tracker.ceph.com/issues/50826
2956
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2957 4 Patrick Donnelly
2958
2959
h3. 2021 May 01
2960
2961
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2962
2963
* https://tracker.ceph.com/issues/45434
2964
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2965
* https://tracker.ceph.com/issues/50281
2966
    qa: untar_snap_rm timeout
2967
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2968
    qa: quota failure
2969
* https://tracker.ceph.com/issues/48773
2970
    qa: scrub does not complete
2971
* https://tracker.ceph.com/issues/50390
2972
    mds: monclient: wait_auth_rotating timed out after 30
2973
* https://tracker.ceph.com/issues/50250
2974
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2975
* https://tracker.ceph.com/issues/50622 (regression)
2976
    msg: active_connections regression
2977
* https://tracker.ceph.com/issues/45591
2978
    mgr: FAILED ceph_assert(daemon != nullptr)
2979
* https://tracker.ceph.com/issues/50221
2980
    qa: snaptest-git-ceph failure in git diff
2981
* https://tracker.ceph.com/issues/50016
2982
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2983 3 Patrick Donnelly
2984
2985
h3. 2021 Apr 15
2986
2987
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2988
2989
* https://tracker.ceph.com/issues/50281
2990
    qa: untar_snap_rm timeout
2991
* https://tracker.ceph.com/issues/50220
2992
    qa: dbench workload timeout
2993
* https://tracker.ceph.com/issues/50246
2994
    mds: failure replaying journal (EMetaBlob)
2995
* https://tracker.ceph.com/issues/50250
2996
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2997
* https://tracker.ceph.com/issues/50016
2998
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2999
* https://tracker.ceph.com/issues/50222
3000
    osd: 5.2s0 deep-scrub : stat mismatch
3001
* https://tracker.ceph.com/issues/45434
3002
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3003
* https://tracker.ceph.com/issues/49845
3004
    qa: failed umount in test_volumes
3005
* https://tracker.ceph.com/issues/37808
3006
    osd: osdmap cache weak_refs assert during shutdown
3007
* https://tracker.ceph.com/issues/50387
3008
    client: fs/snaps failure
3009
* https://tracker.ceph.com/issues/50389
3010
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
3011
* https://tracker.ceph.com/issues/50216
3012
    qa: "ls: cannot access 'lost+found': No such file or directory"
3013
* https://tracker.ceph.com/issues/50390
3014
    mds: monclient: wait_auth_rotating timed out after 30
3015
3016 1 Patrick Donnelly
3017
3018 2 Patrick Donnelly
h3. 2021 Apr 08
3019
3020
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
3021
3022
* https://tracker.ceph.com/issues/45434
3023
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3024
* https://tracker.ceph.com/issues/50016
3025
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3026
* https://tracker.ceph.com/issues/48773
3027
    qa: scrub does not complete
3028
* https://tracker.ceph.com/issues/50279
3029
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
3030
* https://tracker.ceph.com/issues/50246
3031
    mds: failure replaying journal (EMetaBlob)
3032
* https://tracker.ceph.com/issues/48365
3033
    qa: ffsb build failure on CentOS 8.2
3034
* https://tracker.ceph.com/issues/50216
3035
    qa: "ls: cannot access 'lost+found': No such file or directory"
3036
* https://tracker.ceph.com/issues/50223
3037
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3038
* https://tracker.ceph.com/issues/50280
3039
    cephadm: RuntimeError: uid/gid not found
3040
* https://tracker.ceph.com/issues/50281
3041
    qa: untar_snap_rm timeout
3042
3043 1 Patrick Donnelly
h3. 2021 Apr 08
3044
3045
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
3046
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
3047
3048
* https://tracker.ceph.com/issues/50246
3049
    mds: failure replaying journal (EMetaBlob)
3050
* https://tracker.ceph.com/issues/50250
3051
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3052
3053
3054
h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" (a `damage ls` sketch follows this list)
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
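
The scrub-error entries in this and several nearby runs end with "see mds.a log and `damage ls` output for details". Below is a minimal triage sketch, not part of the QA suite, for pulling and skimming that output; it assumes a `ceph` CLI with access to the cluster under test and an active MDS named "a", and the JSON field names mentioned in the comments are assumptions rather than a documented schema.

<pre><code class="python">
#!/usr/bin/env python3
# Minimal triage sketch (assumptions: `ceph` CLI on PATH with access to the
# cluster under test; the active MDS is "mds.a").
import json
import subprocess


def damage_ls(mds_id="a"):
    """Return the damage table of the given MDS as parsed JSON."""
    out = subprocess.check_output(["ceph", "tell", f"mds.{mds_id}", "damage", "ls"])
    return json.loads(out)


if __name__ == "__main__":
    for entry in damage_ls():
        # Each entry carries a damage type plus identifying metadata
        # (inode/dentry/dirfrag); exact field names may vary by release.
        print(entry.get("damage_type", "unknown"), entry)
</code></pre>

The same command can of course be run by hand on the MDS host; the sketch only saves retyping when triaging several runs in a row.
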
h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

In addition, there was a failure caused by PR https://github.com/ceph/ceph/pull/39969
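
Many of the trackers above recur run after run. A small, hypothetical helper for tallying them from a plain-text export of this page ("main_branch_runs.txt" is an assumed filename, not something teuthology produces):

<pre><code class="python">
#!/usr/bin/env python3
# Hypothetical helper: count how often each tracker URL appears in a
# plain-text export of this page, to highlight the most frequent failures.
# "main_branch_runs.txt" is an assumed filename.
import re
from collections import Counter

TRACKER_RE = re.compile(r"https://tracker\.ceph\.com/issues/(\d+)")

with open("main_branch_runs.txt") as f:
    counts = Counter(TRACKER_RE.findall(f.read()))

for issue, n in counts.most_common(10):
    print(f"{n:3d}x https://tracker.ceph.com/issues/{issue}")
</code></pre>
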
h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing