h1. <code>main</code> branch

h3. ADD NEW ENTRY HERE

h3. 4 Apr 2024

https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/

* https://tracker.ceph.com/issues/64927
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
* https://tracker.ceph.com/issues/65022
  qa: test_max_items_per_obj open procs not fully cleaned up
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/65136
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
* https://tracker.ceph.com/issues/65246
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)

* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has failures with fuse client
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/48562
  qa: scrub - object missing on disk; some files may be lost
* https://tracker.ceph.com/issues/65020
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
  client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/54741
  crash: MDSTableClient::got_journaled_ack(unsigned long)

* https://tracker.ceph.com/issues/65265
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
* https://tracker.ceph.com/issues/65308
  qa: fs was offline but also unexpectedly degraded
* https://tracker.ceph.com/issues/65309
  qa: dbench.sh failed with "ERROR: handle 10318 was not found"

* https://tracker.ceph.com/issues/65018
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2024-04-02

https://tracker.ceph.com/issues/65215

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log":https://tracker.ceph.com/issues/65021
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denials related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.

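A minimal sketch (not part of the QA suite) of how one might observe the i64502 symptom by hand: issue <code>fusermount -u</code> and poll whether the mountpoint actually detaches. The mountpoint path and timeout below are assumptions for illustration only.

<pre><code class="python">
#!/usr/bin/env python3
# Illustrative only: run `fusermount -u` against a ceph-fuse mount and poll
# whether the mount actually goes away, as described in the note above.
import os
import subprocess
import time

MOUNTPOINT = "/mnt/cephfs"   # hypothetical ceph-fuse mountpoint
TIMEOUT = 300                # seconds; roughly matches the 300s waits seen in the teuthology logs

# Ask the FUSE client to unmount (do not fail the script if the command errors).
subprocess.run(["fusermount", "-u", MOUNTPOINT], check=False)

deadline = time.time() + TIMEOUT
while time.time() < deadline:
    if not os.path.ismount(MOUNTPOINT):
        print("unmount completed")
        break
    time.sleep(5)
else:
    print(f"still mounted after {TIMEOUT}s; matches the hang described above")
</code></pre>

In the failing runs the unmount only proceeds once the daemons are stopped during test cleanup, so a check like this would time out.
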
h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* The fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS.
* From the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS.

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(nvm the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023
1014
1015
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
1016
1017
* https://tracker.ceph.com/issues/57676
1018
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1019
* https://tracker.ceph.com/issues/51964
1020
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1021
* https://tracker.ceph.com/issues/59344
1022
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1023
* https://tracker.ceph.com/issues/59346
1024
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1025
* https://tracker.ceph.com/issues/59348
1026
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1027
* https://tracker.ceph.com/issues/61399
1028
    ior build failure
1029
* https://tracker.ceph.com/issues/61399
1030
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1031
* https://tracker.ceph.com/issues/57655
1032
    qa: fs:mixed-clients kernel_untar_build failure
1033
* https://tracker.ceph.com/issues/61243
1034
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1035
* https://tracker.ceph.com/issues/62188
1036
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1037
* https://tracker.ceph.com/issues/62510
1038
    snaptest-git-ceph.sh failure with fs/thrash
1039
* https://tracker.ceph.com/issues/62511
1040
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
1041 165 Venky Shankar
1042
1043
h3. 14 Aug 2023
1044
1045
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1046
1047
* https://tracker.ceph.com/issues/51964
1048
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1049
* https://tracker.ceph.com/issues/61400
1050
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1051
* https://tracker.ceph.com/issues/61399
1052
    ior build failure
1053
* https://tracker.ceph.com/issues/59348
1054
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1055
* https://tracker.ceph.com/issues/59531
1056
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1057
* https://tracker.ceph.com/issues/59344
1058
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1059
* https://tracker.ceph.com/issues/59346
1060
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1061
* https://tracker.ceph.com/issues/61399
1062
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1063
* https://tracker.ceph.com/issues/59684 [kclient bug]
1064
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1065
* https://tracker.ceph.com/issues/61243 (NEW)
1066
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1067
* https://tracker.ceph.com/issues/57655
1068
    qa: fs:mixed-clients kernel_untar_build failure
1069
* https://tracker.ceph.com/issues/57656
1070
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1071 163 Venky Shankar
1072
1073
h3. 28 JULY 2023
1074
1075
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1076
1077
* https://tracker.ceph.com/issues/51964
1078
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1079
* https://tracker.ceph.com/issues/61400
1080
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1081
* https://tracker.ceph.com/issues/61399
1082
    ior build failure
1083
* https://tracker.ceph.com/issues/57676
1084
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1085
* https://tracker.ceph.com/issues/59348
1086
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1087
* https://tracker.ceph.com/issues/59531
1088
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1089
* https://tracker.ceph.com/issues/59344
1090
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1091
* https://tracker.ceph.com/issues/59346
1092
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1093
* https://github.com/ceph/ceph/pull/52556
1094
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1095
* https://tracker.ceph.com/issues/62187
1096
    iozone: command not found
1097
* https://tracker.ceph.com/issues/61399
1098
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1099
* https://tracker.ceph.com/issues/62188
1100 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1101 158 Rishabh Dave
1102
h3. 24 Jul 2023
1103
1104
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1105
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1106
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1107
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1108
An extra run to check whether blogbench.sh fails every time:
1109
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1110
The blogbench.sh failure was seen for the first time on the above runs; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1111 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
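
For reference, a targeted re-run of a single workunit such as blogbench is usually scheduled by filtering the fs suite. The sketch below only illustrates the idea; the branch name, machine type, filter and priority are assumptions, not the exact scheduling commands used for the runs above.

<pre>
# Hedged sketch (flags and names for illustration only): schedule just the
# blogbench jobs from the fs suite against the ceph-ci branch under test.
teuthology-suite --suite fs \
    --ceph wip-rishabh-2023Jul13 \
    --machine-type smithi \
    --filter blogbench \
    --priority 100
</pre>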
1112
1113
* https://tracker.ceph.com/issues/61892
1114
  test_snapshot_remove (test_strays.TestStrays) failed
1115
* https://tracker.ceph.com/issues/53859
1116
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1117
* https://tracker.ceph.com/issues/61982
1118
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1119
* https://tracker.ceph.com/issues/52438
1120
  qa: ffsb timeout
1121
* https://tracker.ceph.com/issues/54460
1122
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1123
* https://tracker.ceph.com/issues/57655
1124
  qa: fs:mixed-clients kernel_untar_build failure
1125
* https://tracker.ceph.com/issues/48773
1126
  reached max tries: scrub does not complete
1127
* https://tracker.ceph.com/issues/58340
1128
  mds: fsstress.sh hangs with multimds
1129
* https://tracker.ceph.com/issues/61400
1130
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1131
* https://tracker.ceph.com/issues/57206
1132
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1133
  
1134
* https://tracker.ceph.com/issues/57656
1135
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1136
* https://tracker.ceph.com/issues/61399
1137
  ior build failure
1138
* https://tracker.ceph.com/issues/57676
1139
  error during scrub thrashing: backtrace
1140
  
1141
* https://tracker.ceph.com/issues/38452
1142
  'sudo -u postgres -- pgbench -s 500 -i' failed
1143 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1144 157 Venky Shankar
  blogbench.sh failure
1145
1146
h3. 18 July 2023
1147
1148
* https://tracker.ceph.com/issues/52624
1149
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1150
* https://tracker.ceph.com/issues/57676
1151
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1152
* https://tracker.ceph.com/issues/54460
1153
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1154
* https://tracker.ceph.com/issues/57655
1155
    qa: fs:mixed-clients kernel_untar_build failure
1156
* https://tracker.ceph.com/issues/51964
1157
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1158
* https://tracker.ceph.com/issues/59344
1159
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1160
* https://tracker.ceph.com/issues/61182
1161
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1162
* https://tracker.ceph.com/issues/61957
1163
    test_client_limits.TestClientLimits.test_client_release_bug
1164
* https://tracker.ceph.com/issues/59348
1165
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1166
* https://tracker.ceph.com/issues/61892
1167
    test_strays.TestStrays.test_snapshot_remove failed
1168
* https://tracker.ceph.com/issues/59346
1169
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1170
* https://tracker.ceph.com/issues/44565
1171
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1172
* https://tracker.ceph.com/issues/62067
1173
    ffsb.sh failure "Resource temporarily unavailable"
1174 156 Venky Shankar
1175
1176
h3. 17 July 2023
1177
1178
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1179
1180
* https://tracker.ceph.com/issues/61982
1181
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1182
* https://tracker.ceph.com/issues/59344
1183
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1184
* https://tracker.ceph.com/issues/61182
1185
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1186
* https://tracker.ceph.com/issues/61957
1187
    test_client_limits.TestClientLimits.test_client_release_bug
1188
* https://tracker.ceph.com/issues/61400
1189
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1190
* https://tracker.ceph.com/issues/59348
1191
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1192
* https://tracker.ceph.com/issues/61892
1193
    test_strays.TestStrays.test_snapshot_remove failed
1194
* https://tracker.ceph.com/issues/59346
1195
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1196
* https://tracker.ceph.com/issues/62036
1197
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1198
* https://tracker.ceph.com/issues/61737
1199
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1200
* https://tracker.ceph.com/issues/44565
1201
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1202 155 Rishabh Dave
1203 1 Patrick Donnelly
1204 153 Rishabh Dave
h3. 13 July 2023 Run 2
1205 152 Rishabh Dave
1206
1207
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1208
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1209
1210
* https://tracker.ceph.com/issues/61957
1211
  test_client_limits.TestClientLimits.test_client_release_bug
1212
* https://tracker.ceph.com/issues/61982
1213
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1214
* https://tracker.ceph.com/issues/59348
1215
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1216
* https://tracker.ceph.com/issues/59344
1217
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1218
* https://tracker.ceph.com/issues/54460
1219
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1220
* https://tracker.ceph.com/issues/57655
1221
  qa: fs:mixed-clients kernel_untar_build failure
1222
* https://tracker.ceph.com/issues/61400
1223
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1224
* https://tracker.ceph.com/issues/61399
1225
  ior build failure
1226
1227 151 Venky Shankar
h3. 13 July 2023
1228
1229
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1230
1231
* https://tracker.ceph.com/issues/54460
1232
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1233
* https://tracker.ceph.com/issues/61400
1234
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1235
* https://tracker.ceph.com/issues/57655
1236
    qa: fs:mixed-clients kernel_untar_build failure
1237
* https://tracker.ceph.com/issues/61945
1238
    LibCephFS.DelegTimeout failure
1239
* https://tracker.ceph.com/issues/52624
1240
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1241
* https://tracker.ceph.com/issues/57676
1242
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1243
* https://tracker.ceph.com/issues/59348
1244
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1245
* https://tracker.ceph.com/issues/59344
1246
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1247
* https://tracker.ceph.com/issues/51964
1248
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1249
* https://tracker.ceph.com/issues/59346
1250
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1251
* https://tracker.ceph.com/issues/61982
1252
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1253 150 Rishabh Dave
1254
1255
h3. 13 Jul 2023
1256
1257
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1258
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1259
1260
* https://tracker.ceph.com/issues/61957
1261
  test_client_limits.TestClientLimits.test_client_release_bug
1262
* https://tracker.ceph.com/issues/59348
1263
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1264
* https://tracker.ceph.com/issues/59346
1265
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1266
* https://tracker.ceph.com/issues/48773
1267
  scrub does not complete: reached max tries
1268
* https://tracker.ceph.com/issues/59344
1269
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" (see the setfattr sketch after this list)
1270
* https://tracker.ceph.com/issues/52438
1271
  qa: ffsb timeout
1272
* https://tracker.ceph.com/issues/57656
1273
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1274
* https://tracker.ceph.com/issues/58742
1275
  xfstests-dev: kcephfs: generic
1276
* https://tracker.ceph.com/issues/61399
1277 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
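
The recurring quota.sh failure above reports "setfattr: .: Invalid argument". CephFS quotas are set through the ceph.quota.* virtual xattrs, so a quick manual check along these lines can help separate a client xattr problem from a test problem; this is a hedged sketch with placeholder paths and limits, not the workunit itself.

<pre>
# Hedged reproduction sketch; /mnt/cephfs and the limits are placeholders and
# not values from the failed jobs.
cd /mnt/cephfs
mkdir -p quota_test && cd quota_test

setfattr -n ceph.quota.max_bytes -v 100000000 .   # ~100 MB byte quota
setfattr -n ceph.quota.max_files -v 1000 .        # file-count quota

getfattr -n ceph.quota.max_bytes .                # read the values back
getfattr -n ceph.quota.max_files .
</pre>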
1278 149 Rishabh Dave
1279 148 Rishabh Dave
h3. 12 July 2023
1280
1281
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1282
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1283
1284
* https://tracker.ceph.com/issues/61892
1285
  test_strays.TestStrays.test_snapshot_remove failed
1286
* https://tracker.ceph.com/issues/59348
1287
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1288
* https://tracker.ceph.com/issues/53859
1289
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1290
* https://tracker.ceph.com/issues/59346
1291
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1292
* https://tracker.ceph.com/issues/58742
1293
  xfstests-dev: kcephfs: generic
1294
* https://tracker.ceph.com/issues/59344
1295
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1296
* https://tracker.ceph.com/issues/52438
1297
  qa: ffsb timeout
1298
* https://tracker.ceph.com/issues/57656
1299
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1300
* https://tracker.ceph.com/issues/54460
1301
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1302
* https://tracker.ceph.com/issues/57655
1303
  qa: fs:mixed-clients kernel_untar_build failure
1304
* https://tracker.ceph.com/issues/61182
1305
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1306
* https://tracker.ceph.com/issues/61400
1307
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1308 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1309 146 Patrick Donnelly
  reached max tries: scrub does not complete
1310
1311
h3. 05 July 2023
1312
1313
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1314
1315 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1316 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1317
1318
h3. 27 Jun 2023
1319
1320
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1321 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1322
1323
* https://tracker.ceph.com/issues/59348
1324
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1325
* https://tracker.ceph.com/issues/54460
1326
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1327
* https://tracker.ceph.com/issues/59346
1328
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1329
* https://tracker.ceph.com/issues/59344
1330
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1331
* https://tracker.ceph.com/issues/61399
1332
  libmpich: undefined references to fi_strerror
1333
* https://tracker.ceph.com/issues/50223
1334
  client.xxxx isn't responding to mclientcaps(revoke)
1335 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1336
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1337 142 Venky Shankar
1338
1339
h3. 22 June 2023
1340
1341
* https://tracker.ceph.com/issues/57676
1342
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1343
* https://tracker.ceph.com/issues/54460
1344
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1345
* https://tracker.ceph.com/issues/59344
1346
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1347
* https://tracker.ceph.com/issues/59348
1348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1349
* https://tracker.ceph.com/issues/61400
1350
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1351
* https://tracker.ceph.com/issues/57655
1352
    qa: fs:mixed-clients kernel_untar_build failure
1353
* https://tracker.ceph.com/issues/61394
1354
    qa/quincy: cluster [WRN] "evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1355
* https://tracker.ceph.com/issues/61762
1356
    qa: wait_for_clean: failed before timeout expired
1357
* https://tracker.ceph.com/issues/61775
1358
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1359
* https://tracker.ceph.com/issues/44565
1360
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1361
* https://tracker.ceph.com/issues/61790
1362
    cephfs client to mds comms remain silent after reconnect
1363
* https://tracker.ceph.com/issues/61791
1364
    snaptest-git-ceph.sh test timed out (job dead)
1365 139 Venky Shankar
1366
1367
h3. 20 June 2023
1368
1369
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1370
1371
* https://tracker.ceph.com/issues/57676
1372
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1373
* https://tracker.ceph.com/issues/54460
1374
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1375 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1376 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1377 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1378 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1379
* https://tracker.ceph.com/issues/59344
1380
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1381
* https://tracker.ceph.com/issues/59348
1382
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1383
* https://tracker.ceph.com/issues/57656
1384
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1385
* https://tracker.ceph.com/issues/61400
1386
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1387
* https://tracker.ceph.com/issues/57655
1388
    qa: fs:mixed-clients kernel_untar_build failure
1389
* https://tracker.ceph.com/issues/44565
1390
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1391
* https://tracker.ceph.com/issues/61737
1392 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1393
1394
h3. 16 June 2023
1395
1396 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1397 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1398 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1399 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1400
1401
1402
* https://tracker.ceph.com/issues/59344
1403
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1404 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1405
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1406 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1407
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1408
* https://tracker.ceph.com/issues/57656
1409
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1410
* https://tracker.ceph.com/issues/54460
1411
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1412 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1413
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1414 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1415
  libmpich: undefined references to fi_strerror
1416
* https://tracker.ceph.com/issues/58945
1417
  xfstests-dev: ceph-fuse: generic 
1418 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1419 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic (see the xfstests sketch after this list)
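
The xfstests-dev entries above come from running the upstream xfstests "generic" group against CephFS mounts (ceph-fuse and kclient). The sketch below is a rough, assumption-heavy outline of such a run; the paths, monitor address and config are placeholders, and the QA harness drives this through tasks.cephfs.tests_from_xfstests_dev instead.

<pre>
# Hedged sketch; paths, monitor address and mount points are placeholders, and
# the QA harness sets this up differently.
git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
cd xfstests-dev && make

cat > local.config <<'EOF'
export FSTYP=ceph
export TEST_DEV=10.0.0.1:6789:/
export TEST_DIR=/mnt/test
export SCRATCH_DEV=10.0.0.1:6789:/scratch
export SCRATCH_MNT=/mnt/scratch
EOF

sudo ./check -g generic
</pre>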
1420
1421
h3. 24 May 2023
1422
1423
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1424
1425
* https://tracker.ceph.com/issues/57676
1426
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1427
* https://tracker.ceph.com/issues/59683
1428
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1429
* https://tracker.ceph.com/issues/61399
1430
    qa: "[Makefile:299: ior] Error 1"
1431
* https://tracker.ceph.com/issues/61265
1432
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1433
* https://tracker.ceph.com/issues/59348
1434
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1435
* https://tracker.ceph.com/issues/59346
1436
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1437
* https://tracker.ceph.com/issues/61400
1438
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1439
* https://tracker.ceph.com/issues/54460
1440
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1441
* https://tracker.ceph.com/issues/51964
1442
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1443
* https://tracker.ceph.com/issues/59344
1444
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1445
* https://tracker.ceph.com/issues/61407
1446
    mds: abort on CInode::verify_dirfrags
1447
* https://tracker.ceph.com/issues/48773
1448
    qa: scrub does not complete
1449
* https://tracker.ceph.com/issues/57655
1450
    qa: fs:mixed-clients kernel_untar_build failure
1451
* https://tracker.ceph.com/issues/61409
1452 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1453
1454
h3. 15 May 2023
1455 130 Venky Shankar
1456 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1457
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1458
1459
* https://tracker.ceph.com/issues/52624
1460
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1461
* https://tracker.ceph.com/issues/54460
1462
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1463
* https://tracker.ceph.com/issues/57676
1464
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1465
* https://tracker.ceph.com/issues/59684 [kclient bug]
1466
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1467
* https://tracker.ceph.com/issues/59348
1468
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1469 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1470
    dbench test results in call trace in dmesg [kclient bug]
1471 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1472 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1473 125 Venky Shankar
1474
 
1475 129 Rishabh Dave
h3. 11 May 2023
1476
1477
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1478
1479
* https://tracker.ceph.com/issues/59684 [kclient bug]
1480
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1481
* https://tracker.ceph.com/issues/59348
1482
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1483
* https://tracker.ceph.com/issues/57655
1484
  qa: fs:mixed-clients kernel_untar_build failure
1485
* https://tracker.ceph.com/issues/57676
1486
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1487
* https://tracker.ceph.com/issues/55805
1488
  error during scrub thrashing reached max tries in 900 secs
1489
* https://tracker.ceph.com/issues/54460
1490
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1491
* https://tracker.ceph.com/issues/57656
1492
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1493
* https://tracker.ceph.com/issues/58220
1494
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1495 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1496
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1497 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1498
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1499 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1500
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1501 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1502
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1503
1504 125 Venky Shankar
h3. 11 May 2023
1505 127 Venky Shankar
1506
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1507 126 Venky Shankar
1508 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1509
 was included in the branch; however, the PR has since been updated and needs a retest).
1510
1511
* https://tracker.ceph.com/issues/52624
1512
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1513
* https://tracker.ceph.com/issues/54460
1514
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1515
* https://tracker.ceph.com/issues/57676
1516
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1517
* https://tracker.ceph.com/issues/59683
1518
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1519
* https://tracker.ceph.com/issues/59684 [kclient bug]
1520
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1521
* https://tracker.ceph.com/issues/59348
1522 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1523
1524
h3. 09 May 2023
1525
1526
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1527
1528
* https://tracker.ceph.com/issues/52624
1529
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1530
* https://tracker.ceph.com/issues/58340
1531
    mds: fsstress.sh hangs with multimds
1532
* https://tracker.ceph.com/issues/54460
1533
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1534
* https://tracker.ceph.com/issues/57676
1535
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1536
* https://tracker.ceph.com/issues/51964
1537
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1538
* https://tracker.ceph.com/issues/59350
1539
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1540
* https://tracker.ceph.com/issues/59683
1541
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1542
* https://tracker.ceph.com/issues/59684 [kclient bug]
1543
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1544
* https://tracker.ceph.com/issues/59348
1545 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1546
1547
h3. 10 Apr 2023
1548
1549
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1550
1551
* https://tracker.ceph.com/issues/52624
1552
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1553
* https://tracker.ceph.com/issues/58340
1554
    mds: fsstress.sh hangs with multimds
1555
* https://tracker.ceph.com/issues/54460
1556
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1557
* https://tracker.ceph.com/issues/57676
1558
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1559 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1560 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1561 121 Rishabh Dave
1562 120 Rishabh Dave
h3. 31 Mar 2023
1563 122 Rishabh Dave
1564
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1565 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1566
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1567
1568
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).
1569
1570
* https://tracker.ceph.com/issues/57676
1571
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1572
* https://tracker.ceph.com/issues/54460
1573
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1574
* https://tracker.ceph.com/issues/58220
1575
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1576
* https://tracker.ceph.com/issues/58220#note-9
1577
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1578
* https://tracker.ceph.com/issues/56695
1579
  Command failed (workunit test suites/pjd.sh)
1580
* https://tracker.ceph.com/issues/58564 
1581
  workunit dbench failed with error code 1
1582
* https://tracker.ceph.com/issues/57206
1583
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1584
* https://tracker.ceph.com/issues/57580
1585
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1586
* https://tracker.ceph.com/issues/58940
1587
  ceph osd hit ceph_abort
1588
* https://tracker.ceph.com/issues/55805
1589 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1590
1591
h3. 30 March 2023
1592
1593
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1594
1595
* https://tracker.ceph.com/issues/58938
1596
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1597
* https://tracker.ceph.com/issues/51964
1598
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1599
* https://tracker.ceph.com/issues/58340
1600 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1601
1602 115 Venky Shankar
h3. 29 March 2023
1603 114 Venky Shankar
1604
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1605
1606
* https://tracker.ceph.com/issues/56695
1607
    [RHEL stock] pjd test failures
1608
* https://tracker.ceph.com/issues/57676
1609
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1610
* https://tracker.ceph.com/issues/57087
1611
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1612 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1613
    mds: fsstress.sh hangs with multimds
1614 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1615
    qa: fs:mixed-clients kernel_untar_build failure
1616 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1617
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1618 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1619 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1620
1621
h3. 13 Mar 2023
1622
1623
* https://tracker.ceph.com/issues/56695
1624
    [RHEL stock] pjd test failures
1625
* https://tracker.ceph.com/issues/57676
1626
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1627
* https://tracker.ceph.com/issues/51964
1628
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1629
* https://tracker.ceph.com/issues/54460
1630
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1631
* https://tracker.ceph.com/issues/57656
1632 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1633
1634
h3. 09 Mar 2023
1635
1636
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1637
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1638
1639
* https://tracker.ceph.com/issues/56695
1640
    [RHEL stock] pjd test failures
1641
* https://tracker.ceph.com/issues/57676
1642
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1643
* https://tracker.ceph.com/issues/51964
1644
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1645
* https://tracker.ceph.com/issues/54460
1646
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1647
* https://tracker.ceph.com/issues/58340
1648
    mds: fsstress.sh hangs with multimds
1649
* https://tracker.ceph.com/issues/57087
1650 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1651
1652
h3. 07 Mar 2023
1653
1654
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1655
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1656
1657
* https://tracker.ceph.com/issues/56695
1658
    [RHEL stock] pjd test failures
1659
* https://tracker.ceph.com/issues/57676
1660
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1661
* https://tracker.ceph.com/issues/51964
1662
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1663
* https://tracker.ceph.com/issues/57656
1664
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1665
* https://tracker.ceph.com/issues/57655
1666
    qa: fs:mixed-clients kernel_untar_build failure
1667
* https://tracker.ceph.com/issues/58220
1668
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1669
* https://tracker.ceph.com/issues/54460
1670
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1671
* https://tracker.ceph.com/issues/58934
1672 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1673
1674
h3. 28 Feb 2023
1675
1676
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1677
1678
* https://tracker.ceph.com/issues/56695
1679
    [RHEL stock] pjd test failures
1680
* https://tracker.ceph.com/issues/57676
1681
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1682 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1683 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1684
1685 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1686
1687
h3. 25 Jan 2023
1688
1689
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1690
1691
* https://tracker.ceph.com/issues/52624
1692
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1693
* https://tracker.ceph.com/issues/56695
1694
    [RHEL stock] pjd test failures
1695
* https://tracker.ceph.com/issues/57676
1696
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1697
* https://tracker.ceph.com/issues/56446
1698
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1699
* https://tracker.ceph.com/issues/57206
1700
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1701
* https://tracker.ceph.com/issues/58220
1702
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1703
* https://tracker.ceph.com/issues/58340
1704
  mds: fsstress.sh hangs with multimds
1705
* https://tracker.ceph.com/issues/56011
1706
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1707
* https://tracker.ceph.com/issues/54460
1708 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1709
1710
h3. 30 JAN 2023
1711
1712
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1713
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1714 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1715
1716 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1717
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1718
* https://tracker.ceph.com/issues/56695
1719
  [RHEL stock] pjd test failures
1720
* https://tracker.ceph.com/issues/57676
1721
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1722
* https://tracker.ceph.com/issues/55332
1723
  Failure in snaptest-git-ceph.sh
1724
* https://tracker.ceph.com/issues/51964
1725
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1726
* https://tracker.ceph.com/issues/56446
1727
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1728
* https://tracker.ceph.com/issues/57655 
1729
  qa: fs:mixed-clients kernel_untar_build failure
1730
* https://tracker.ceph.com/issues/54460
1731
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1732 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1733
  mds: fsstress.sh hangs with multimds
1734 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1735 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1736
1737
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1738 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1739
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1740 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1741 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1742
1743
h3. 15 Dec 2022
1744
1745
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1746
1747
* https://tracker.ceph.com/issues/52624
1748
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1749
* https://tracker.ceph.com/issues/56695
1750
    [RHEL stock] pjd test failures
1751
* https://tracker.ceph.com/issues/58219
1752
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1753
* https://tracker.ceph.com/issues/57655
1754
    qa: fs:mixed-clients kernel_untar_build failure
1755
* https://tracker.ceph.com/issues/57676
1756
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1757
* https://tracker.ceph.com/issues/58340
1758 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1759
1760
h3. 08 Dec 2022
1761 99 Venky Shankar
1762 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1763
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1764
1765
(lots of transient git.ceph.com failures)
1766
1767
* https://tracker.ceph.com/issues/52624
1768
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1769
* https://tracker.ceph.com/issues/56695
1770
    [RHEL stock] pjd test failures
1771
* https://tracker.ceph.com/issues/57655
1772
    qa: fs:mixed-clients kernel_untar_build failure
1773
* https://tracker.ceph.com/issues/58219
1774
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1775
* https://tracker.ceph.com/issues/58220
1776
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1777 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1778
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1779 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1780
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1781
* https://tracker.ceph.com/issues/54460
1782
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1783 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1784 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1785
1786
h3. 14 Oct 2022
1787
1788
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1789
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1790
1791
* https://tracker.ceph.com/issues/52624
1792
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1793
* https://tracker.ceph.com/issues/55804
1794
    Command failed (workunit test suites/pjd.sh)
1795
* https://tracker.ceph.com/issues/51964
1796
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1797
* https://tracker.ceph.com/issues/57682
1798
    client: ERROR: test_reconnect_after_blocklisted
1799 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1800 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1801
1802
h3. 10 Oct 2022
1803 92 Rishabh Dave
1804 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1805
1806
reruns
1807
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1808 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1809 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1810 93 Rishabh Dave
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1811 91 Rishabh Dave
1812
known bugs
1813
* https://tracker.ceph.com/issues/52624
1814
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1815
* https://tracker.ceph.com/issues/50223
1816
  client.xxxx isn't responding to mclientcaps(revoke)
1817
* https://tracker.ceph.com/issues/57299
1818
  qa: test_dump_loads fails with JSONDecodeError
1819
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1820
  qa: fs:mixed-clients kernel_untar_build failure
1821
* https://tracker.ceph.com/issues/57206
1822 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1823
1824
h3. 2022 Sep 29
1825
1826
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1827
1828
* https://tracker.ceph.com/issues/55804
1829
  Command failed (workunit test suites/pjd.sh)
1830
* https://tracker.ceph.com/issues/36593
1831
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1832
* https://tracker.ceph.com/issues/52624
1833
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1834
* https://tracker.ceph.com/issues/51964
1835
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1836
* https://tracker.ceph.com/issues/56632
1837
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1838
* https://tracker.ceph.com/issues/50821
1839 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1840
1841
h3. 2022 Sep 26
1842
1843
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1844
1845
* https://tracker.ceph.com/issues/55804
1846
    qa failure: pjd link tests failed
1847
* https://tracker.ceph.com/issues/57676
1848
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1849
* https://tracker.ceph.com/issues/52624
1850
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1851
* https://tracker.ceph.com/issues/57580
1852
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1853
* https://tracker.ceph.com/issues/48773
1854
    qa: scrub does not complete
1855
* https://tracker.ceph.com/issues/57299
1856
    qa: test_dump_loads fails with JSONDecodeError
1857
* https://tracker.ceph.com/issues/57280
1858
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1859
* https://tracker.ceph.com/issues/57205
1860
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1861
* https://tracker.ceph.com/issues/57656
1862
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1863
* https://tracker.ceph.com/issues/57677
1864
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1865
* https://tracker.ceph.com/issues/57206
1866
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1867
* https://tracker.ceph.com/issues/57446
1868
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1869 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1870
    qa: fs:mixed-clients kernel_untar_build failure
1871 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1872
    client: ERROR: test_reconnect_after_blocklisted
1873 87 Patrick Donnelly
1874
1875
h3. 2022 Sep 22
1876
1877
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1878
1879
* https://tracker.ceph.com/issues/57299
1880
    qa: test_dump_loads fails with JSONDecodeError
1881
* https://tracker.ceph.com/issues/57205
1882
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1883
* https://tracker.ceph.com/issues/52624
1884
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1885
* https://tracker.ceph.com/issues/57580
1886
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1887
* https://tracker.ceph.com/issues/57280
1888
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1889
* https://tracker.ceph.com/issues/48773
1890
    qa: scrub does not complete
1891
* https://tracker.ceph.com/issues/56446
1892
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1893
* https://tracker.ceph.com/issues/57206
1894
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1895
* https://tracker.ceph.com/issues/51267
1896
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1897
1898
NEW:
1899
1900
* https://tracker.ceph.com/issues/57656
1901
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1902
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1903
    qa: fs:mixed-clients kernel_untar_build failure
1904
* https://tracker.ceph.com/issues/57657
1905
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1906
1907
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1908 80 Venky Shankar
1909 79 Venky Shankar
1910
h3. 2022 Sep 16
1911
1912
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1913
1914
* https://tracker.ceph.com/issues/57446
1915
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1916
* https://tracker.ceph.com/issues/57299
1917
    qa: test_dump_loads fails with JSONDecodeError
1918
* https://tracker.ceph.com/issues/50223
1919
    client.xxxx isn't responding to mclientcaps(revoke)
1920
* https://tracker.ceph.com/issues/52624
1921
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1922
* https://tracker.ceph.com/issues/57205
1923
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1924
* https://tracker.ceph.com/issues/57280
1925
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1926
* https://tracker.ceph.com/issues/51282
1927
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1928
* https://tracker.ceph.com/issues/48203
1929
  https://tracker.ceph.com/issues/36593
1930
    qa: quota failure
1931
    qa: quota failure caused by clients stepping on each other
1932
* https://tracker.ceph.com/issues/57580
1933 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1934
1935 76 Rishabh Dave
1936
h3. 2022 Aug 26
1937
1938
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1939
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1940
1941
* https://tracker.ceph.com/issues/57206
1942
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1943
* https://tracker.ceph.com/issues/56632
1944
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1945
* https://tracker.ceph.com/issues/56446
1946
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1947
* https://tracker.ceph.com/issues/51964
1948
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1949
* https://tracker.ceph.com/issues/53859
1950
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1951
1952
* https://tracker.ceph.com/issues/54460
1953
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1954
* https://tracker.ceph.com/issues/54462
1955
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1956
* https://tracker.ceph.com/issues/54460
1957
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1958
* https://tracker.ceph.com/issues/36593
1959
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1960
1961
* https://tracker.ceph.com/issues/52624
1962
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1963
* https://tracker.ceph.com/issues/55804
1964
  Command failed (workunit test suites/pjd.sh)
1965
* https://tracker.ceph.com/issues/50223
1966
  client.xxxx isn't responding to mclientcaps(revoke)
1967 75 Venky Shankar
1968
1969
h3. 2022 Aug 22
1970
1971
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1972
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1973
1974
* https://tracker.ceph.com/issues/52624
1975
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1976
* https://tracker.ceph.com/issues/56446
1977
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1978
* https://tracker.ceph.com/issues/55804
1979
    Command failed (workunit test suites/pjd.sh)
1980
* https://tracker.ceph.com/issues/51278
1981
    mds: "FAILED ceph_assert(!segments.empty())"
1982
* https://tracker.ceph.com/issues/54460
1983
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1984
* https://tracker.ceph.com/issues/57205
1985
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1986
* https://tracker.ceph.com/issues/57206
1987
    ceph_test_libcephfs_reclaim crashes during test
1988
* https://tracker.ceph.com/issues/53859
1989
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1990
* https://tracker.ceph.com/issues/50223
1991 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1992
1993
h3. 2022 Aug 12
1994
1995
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1996
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1997
1998
* https://tracker.ceph.com/issues/52624
1999
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2000
* https://tracker.ceph.com/issues/56446
2001
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2002
* https://tracker.ceph.com/issues/51964
2003
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2004
* https://tracker.ceph.com/issues/55804
2005
    Command failed (workunit test suites/pjd.sh)
2006
* https://tracker.ceph.com/issues/50223
2007
    client.xxxx isn't responding to mclientcaps(revoke)
2008
* https://tracker.ceph.com/issues/50821
2009 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
2010 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2011 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2012
2013
h3. 2022 Aug 04
2014
2015
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2016
2017 69 Rishabh Dave
Unrelated teuthology failure on RHEL.
2018 68 Rishabh Dave
2019
h3. 2022 Jul 25
2020
2021
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2022
2023 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2024
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2025 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2026
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2027
2028
* https://tracker.ceph.com/issues/55804
2029
  Command failed (workunit test suites/pjd.sh)
2030
* https://tracker.ceph.com/issues/50223
2031
  client.xxxx isn't responding to mclientcaps(revoke)
2032
2033
* https://tracker.ceph.com/issues/54460
2034
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2035 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2036 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2037 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2038 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2039
2040
h3. 2022 July 22
2041
2042
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2043
2044
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
2045
Transient SELinux ping failure.
2046
2047
* https://tracker.ceph.com/issues/56694
2048
    qa: avoid blocking forever on hung umount
2049
* https://tracker.ceph.com/issues/56695
2050
    [RHEL stock] pjd test failures
2051
* https://tracker.ceph.com/issues/56696
2052
    admin keyring disappears during qa run
2053
* https://tracker.ceph.com/issues/56697
2054
    qa: fs/snaps fails for fuse
2055
* https://tracker.ceph.com/issues/50222
2056
    osd: 5.2s0 deep-scrub : stat mismatch
2057
* https://tracker.ceph.com/issues/56698
2058
    client: FAILED ceph_assert(_size == 0)
2059
* https://tracker.ceph.com/issues/50223
2060
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2061 66 Rishabh Dave
2062 65 Rishabh Dave
2063
h3. 2022 Jul 15
2064
2065
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2066
2067
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2068
2069
* https://tracker.ceph.com/issues/53859
2070
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2071
* https://tracker.ceph.com/issues/55804
2072
  Command failed (workunit test suites/pjd.sh)
2073
* https://tracker.ceph.com/issues/50223
2074
  client.xxxx isn't responding to mclientcaps(revoke)
2075
* https://tracker.ceph.com/issues/50222
2076
  osd: deep-scrub : stat mismatch
2077
2078
* https://tracker.ceph.com/issues/56632
2079
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2080
* https://tracker.ceph.com/issues/56634
2081
  workunit test fs/snaps/snaptest-intodir.sh
2082
* https://tracker.ceph.com/issues/56644
2083
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2084
2085 61 Rishabh Dave
2086
2087
h3. 2022 July 05
2088 62 Rishabh Dave
2089 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2090
2091
On the 1st re-run some jobs passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2092
2093
On the 2nd re-run only a few jobs failed:
2094 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2095
2096
2097
* https://tracker.ceph.com/issues/56446
2098
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2099
* https://tracker.ceph.com/issues/55804
2100
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2101
2102
* https://tracker.ceph.com/issues/56445
2103 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2104
* https://tracker.ceph.com/issues/51267
2105
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2106 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2107
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2108 61 Rishabh Dave
2109 58 Venky Shankar
2110
2111
h3. 2022 July 04
2112
2113
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2114
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
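For reference, a minimal sketch of scheduling a run like this with the RHEL jobs filtered out, assuming the standard teuthology-suite CLI; the branch name is the one from the run above and the exact flag spellings may vary with the teuthology version:

<pre>
# Sketch only: schedule the fs suite against the wip branch from this run,
# skipping RHEL jobs (flags are illustrative, not a verified invocation).
teuthology-suite -v \
  --ceph wip-vshankar-testing-20220627-100931 \
  --suite fs \
  --machine-type smithi \
  --filter-out rhel
</pre>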
2115
2116
* https://tracker.ceph.com/issues/56445
2117 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2118
* https://tracker.ceph.com/issues/56446
2119
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2120
* https://tracker.ceph.com/issues/51964
2121 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2122 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2123 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2124
2125
h3. 2022 June 20
2126
2127
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2128
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2129
2130
* https://tracker.ceph.com/issues/52624
2131
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2132
* https://tracker.ceph.com/issues/55804
2133
    qa failure: pjd link tests failed
2134
* https://tracker.ceph.com/issues/54108
2135
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2136
* https://tracker.ceph.com/issues/55332
2137 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2138
2139
h3. 2022 June 13
2140
2141
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2142
2143
* https://tracker.ceph.com/issues/56024
2144
    cephadm: removes ceph.conf during qa run causing command failure
2145
* https://tracker.ceph.com/issues/48773
2146
    qa: scrub does not complete
2147
* https://tracker.ceph.com/issues/56012
2148
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2149 55 Venky Shankar
2150 54 Venky Shankar
2151
h3. 2022 Jun 13
2152
2153
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2154
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2155
2156
* https://tracker.ceph.com/issues/52624
2157
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2158
* https://tracker.ceph.com/issues/51964
2159
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2160
* https://tracker.ceph.com/issues/53859
2161
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2162
* https://tracker.ceph.com/issues/55804
2163
    qa failure: pjd link tests failed
2164
* https://tracker.ceph.com/issues/56003
2165
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2166
* https://tracker.ceph.com/issues/56011
2167
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2168
* https://tracker.ceph.com/issues/56012
2169 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2170
2171
h3. 2022 Jun 07
2172
2173
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2174
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2175
2176
* https://tracker.ceph.com/issues/52624
2177
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2178
* https://tracker.ceph.com/issues/50223
2179
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2180
* https://tracker.ceph.com/issues/50224
2181 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2182
2183
h3. 2022 May 12
2184 52 Venky Shankar
2185 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2186
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2187
2188
* https://tracker.ceph.com/issues/52624
2189
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2190
* https://tracker.ceph.com/issues/50223
2191
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2192
* https://tracker.ceph.com/issues/55332
2193
    Failure in snaptest-git-ceph.sh
2194
* https://tracker.ceph.com/issues/53859
2195 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2196 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2197
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2198 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2199 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)
2200
2201 50 Venky Shankar
h3. 2022 May 04
2202
2203
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2204 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2205
2206
* https://tracker.ceph.com/issues/52624
2207
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2208
* https://tracker.ceph.com/issues/50223
2209
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2210
* https://tracker.ceph.com/issues/55332
2211
    Failure in snaptest-git-ceph.sh
2212
* https://tracker.ceph.com/issues/53859
2213
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2214
* https://tracker.ceph.com/issues/55516
2215
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2216
* https://tracker.ceph.com/issues/55537
2217
    mds: crash during fs:upgrade test
2218
* https://tracker.ceph.com/issues/55538
2219 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2220
2221
h3. 2022 Apr 25
2222
2223
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2224
2225
* https://tracker.ceph.com/issues/52624
2226
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2227
* https://tracker.ceph.com/issues/50223
2228
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2229
* https://tracker.ceph.com/issues/55258
2230
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2231
* https://tracker.ceph.com/issues/55377
2232 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2233
2234
h3. 2022 Apr 14
2235
2236
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2237
2238
* https://tracker.ceph.com/issues/52624
2239
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2240
* https://tracker.ceph.com/issues/50223
2241
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2242
* https://tracker.ceph.com/issues/52438
2243
    qa: ffsb timeout
2244
* https://tracker.ceph.com/issues/55170
2245
    mds: crash during rejoin (CDir::fetch_keys)
2246
* https://tracker.ceph.com/issues/55331
2247
    pjd failure
2248
* https://tracker.ceph.com/issues/48773
2249
    qa: scrub does not complete
2250
* https://tracker.ceph.com/issues/55332
2251
    Failure in snaptest-git-ceph.sh
2252
* https://tracker.ceph.com/issues/55258
2253 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2254
2255 46 Venky Shankar
h3. 2022 Apr 11
2256 45 Venky Shankar
2257
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2258
2259
* https://tracker.ceph.com/issues/48773
2260
    qa: scrub does not complete
2261
* https://tracker.ceph.com/issues/52624
2262
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2263
* https://tracker.ceph.com/issues/52438
2264
    qa: ffsb timeout
2265
* https://tracker.ceph.com/issues/48680
2266
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2267
* https://tracker.ceph.com/issues/55236
2268
    qa: fs/snaps tests fails with "hit max job timeout"
2269
* https://tracker.ceph.com/issues/54108
2270
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2271
* https://tracker.ceph.com/issues/54971
2272
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2273
* https://tracker.ceph.com/issues/50223
2274
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2275
* https://tracker.ceph.com/issues/55258
2276 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2277 42 Venky Shankar
2278 43 Venky Shankar
h3. 2022 Mar 21
2279
2280
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2281
2282
The run didn't go well; there were lots of failures. Debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.
2283
2284
2285 42 Venky Shankar
h3. 2022 Mar 08
2286
2287
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2288
2289
rerun with
2290
- (drop) https://github.com/ceph/ceph/pull/44679
2291
- (drop) https://github.com/ceph/ceph/pull/44958
2292
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2293
2294
* https://tracker.ceph.com/issues/54419 (new)
2295
    `ceph orch upgrade start` seems to never reach completion
2296
* https://tracker.ceph.com/issues/51964
2297
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2298
* https://tracker.ceph.com/issues/52624
2299
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2300
* https://tracker.ceph.com/issues/50223
2301
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2302
* https://tracker.ceph.com/issues/52438
2303
    qa: ffsb timeout
2304
* https://tracker.ceph.com/issues/50821
2305
    qa: untar_snap_rm failure during mds thrashing
2306 41 Venky Shankar
2307
2308
h3. 2022 Feb 09
2309
2310
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2311
2312
rerun with
2313
- (drop) https://github.com/ceph/ceph/pull/37938
2314
- (drop) https://github.com/ceph/ceph/pull/44335
2315
- (drop) https://github.com/ceph/ceph/pull/44491
2316
- (drop) https://github.com/ceph/ceph/pull/44501
2317
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2318
2319
* https://tracker.ceph.com/issues/51964
2320
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2321
* https://tracker.ceph.com/issues/54066
2322
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2323
* https://tracker.ceph.com/issues/48773
2324
    qa: scrub does not complete
2325
* https://tracker.ceph.com/issues/52624
2326
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2327
* https://tracker.ceph.com/issues/50223
2328
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2329
* https://tracker.ceph.com/issues/52438
2330 40 Patrick Donnelly
    qa: ffsb timeout
2331
2332
h3. 2022 Feb 01
2333
2334
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2335
2336
* https://tracker.ceph.com/issues/54107
2337
    kclient: hang during umount
2338
* https://tracker.ceph.com/issues/54106
2339
    kclient: hang during workunit cleanup
2340
* https://tracker.ceph.com/issues/54108
2341
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2342
* https://tracker.ceph.com/issues/48773
2343
    qa: scrub does not complete
2344
* https://tracker.ceph.com/issues/52438
2345
    qa: ffsb timeout
2346 36 Venky Shankar
2347
2348
h3. 2022 Jan 13
2349 39 Venky Shankar
2350 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2351 38 Venky Shankar
2352
rerun with:
2353 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2354
- (drop) https://github.com/ceph/ceph/pull/43184
2355
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2356
2357
* https://tracker.ceph.com/issues/50223
2358
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2359
* https://tracker.ceph.com/issues/51282
2360
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2361
* https://tracker.ceph.com/issues/48773
2362
    qa: scrub does not complete
2363
* https://tracker.ceph.com/issues/52624
2364
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2365
* https://tracker.ceph.com/issues/53859
2366 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2367
2368
h3. 2022 Jan 03
2369
2370
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2371
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2372
2373
* https://tracker.ceph.com/issues/50223
2374
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2375
* https://tracker.ceph.com/issues/51964
2376
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2377
* https://tracker.ceph.com/issues/51267
2378
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2379
* https://tracker.ceph.com/issues/51282
2380
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2381
* https://tracker.ceph.com/issues/50821
2382
    qa: untar_snap_rm failure during mds thrashing
2383 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2384
    mds: "FAILED ceph_assert(!segments.empty())"
2385
* https://tracker.ceph.com/issues/52279
2386 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2387 33 Patrick Donnelly
2388
2389
h3. 2021 Dec 22
2390
2391
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2392
2393
* https://tracker.ceph.com/issues/52624
2394
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2395
* https://tracker.ceph.com/issues/50223
2396
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2397
* https://tracker.ceph.com/issues/52279
2398
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2399
* https://tracker.ceph.com/issues/50224
2400
    qa: test_mirroring_init_failure_with_recovery failure
2401
* https://tracker.ceph.com/issues/48773
2402
    qa: scrub does not complete
2403 32 Venky Shankar
2404
2405
h3. 2021 Nov 30
2406
2407
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2408
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2409
2410
* https://tracker.ceph.com/issues/53436
2411
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2412
* https://tracker.ceph.com/issues/51964
2413
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2414
* https://tracker.ceph.com/issues/48812
2415
    qa: test_scrub_pause_and_resume_with_abort failure
2416
* https://tracker.ceph.com/issues/51076
2417
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2418
* https://tracker.ceph.com/issues/50223
2419
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2420
* https://tracker.ceph.com/issues/52624
2421
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2422
* https://tracker.ceph.com/issues/50250
2423
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
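The scrub-error entries in these runs quote the MDS log pointer to `damage ls`. As a reference, a minimal sketch of inspecting (and, only after the cause is investigated, clearing) those damage entries; the filesystem name cephfs and rank 0 are assumptions for illustration:

<pre>
# Sketch only: list the damage entries referenced by the scrub-error log lines.
ceph tell mds.cephfs:0 damage ls
# Remove a specific entry only after the underlying damage has been investigated.
ceph tell mds.cephfs:0 damage rm <damage_id>
</pre>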
2424 31 Patrick Donnelly
2425
2426
h3. 2021 November 9
2427
2428
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2429
2430
* https://tracker.ceph.com/issues/53214
2431
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2432
* https://tracker.ceph.com/issues/48773
2433
    qa: scrub does not complete
2434
* https://tracker.ceph.com/issues/50223
2435
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2436
* https://tracker.ceph.com/issues/51282
2437
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2438
* https://tracker.ceph.com/issues/52624
2439
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2440
* https://tracker.ceph.com/issues/53216
2441
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2442
* https://tracker.ceph.com/issues/50250
2443
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2444
2445 30 Patrick Donnelly
2446
2447
h3. 2021 November 03
2448
2449
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2450
2451
* https://tracker.ceph.com/issues/51964
2452
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2453
* https://tracker.ceph.com/issues/51282
2454
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2455
* https://tracker.ceph.com/issues/52436
2456
    fs/ceph: "corrupt mdsmap"
2457
* https://tracker.ceph.com/issues/53074
2458
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2459
* https://tracker.ceph.com/issues/53150
2460
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2461
* https://tracker.ceph.com/issues/53155
2462
    MDSMonitor: assertion during upgrade to v16.2.5+
2463 29 Patrick Donnelly
2464
2465
h3. 2021 October 26
2466
2467
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2468
2469
* https://tracker.ceph.com/issues/53074
2470
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2471
* https://tracker.ceph.com/issues/52997
2472
    testing: hanging umount
2473
* https://tracker.ceph.com/issues/50824
2474
    qa: snaptest-git-ceph bus error
2475
* https://tracker.ceph.com/issues/52436
2476
    fs/ceph: "corrupt mdsmap"
2477
* https://tracker.ceph.com/issues/48773
2478
    qa: scrub does not complete
2479
* https://tracker.ceph.com/issues/53082
2480
    ceph-fuse: segmentation fault in Client::handle_mds_map
2481
* https://tracker.ceph.com/issues/50223
2482
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2483
* https://tracker.ceph.com/issues/52624
2484
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2485
* https://tracker.ceph.com/issues/50224
2486
    qa: test_mirroring_init_failure_with_recovery failure
2487
* https://tracker.ceph.com/issues/50821
2488
    qa: untar_snap_rm failure during mds thrashing
2489
* https://tracker.ceph.com/issues/50250
2490
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2491
2492 27 Patrick Donnelly
2493
2494 28 Patrick Donnelly
h3. 2021 October 19
2495 27 Patrick Donnelly
2496
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2497
2498
* https://tracker.ceph.com/issues/52995
2499
    qa: test_standby_count_wanted failure
2500
* https://tracker.ceph.com/issues/52948
2501
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2502
* https://tracker.ceph.com/issues/52996
2503
    qa: test_perf_counters via test_openfiletable
2504
* https://tracker.ceph.com/issues/48772
2505
    qa: pjd: not ok 9, 44, 80
2506
* https://tracker.ceph.com/issues/52997
2507
    testing: hanging umount
2508
* https://tracker.ceph.com/issues/50250
2509
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2510
* https://tracker.ceph.com/issues/52624
2511
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2512
* https://tracker.ceph.com/issues/50223
2513
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2514
* https://tracker.ceph.com/issues/50821
2515
    qa: untar_snap_rm failure during mds thrashing
2516
* https://tracker.ceph.com/issues/48773
2517
    qa: scrub does not complete
2518 26 Patrick Donnelly
2519
2520
h3. 2021 October 12
2521
2522
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2523
2524
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2525
2526
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2527
2528
2529
* https://tracker.ceph.com/issues/51282
2530
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2531
* https://tracker.ceph.com/issues/52948
2532
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2533
* https://tracker.ceph.com/issues/48773
2534
    qa: scrub does not complete
2535
* https://tracker.ceph.com/issues/50224
2536
    qa: test_mirroring_init_failure_with_recovery failure
2537
* https://tracker.ceph.com/issues/52949
2538
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2539 25 Patrick Donnelly
2540 23 Patrick Donnelly
2541 24 Patrick Donnelly
h3. 2021 October 02
2542
2543
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2544
2545
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2546
2547
test_simple failures caused by PR in this set.
2548
2549
A few reruns because of QA infra noise.
2550
2551
* https://tracker.ceph.com/issues/52822
2552
    qa: failed pacific install on fs:upgrade
2553
* https://tracker.ceph.com/issues/52624
2554
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2555
* https://tracker.ceph.com/issues/50223
2556
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2557
* https://tracker.ceph.com/issues/48773
2558
    qa: scrub does not complete
2559
2560
2561 23 Patrick Donnelly
h3. 2021 September 20
2562
2563
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2564
2565
* https://tracker.ceph.com/issues/52677
2566
    qa: test_simple failure
2567
* https://tracker.ceph.com/issues/51279
2568
    kclient hangs on umount (testing branch)
2569
* https://tracker.ceph.com/issues/50223
2570
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2571
* https://tracker.ceph.com/issues/50250
2572
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2573
* https://tracker.ceph.com/issues/52624
2574
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2575
* https://tracker.ceph.com/issues/52438
2576
    qa: ffsb timeout
2577 22 Patrick Donnelly
2578
2579
h3. 2021 September 10
2580
2581
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2582
2583
* https://tracker.ceph.com/issues/50223
2584
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2585
* https://tracker.ceph.com/issues/50250
2586
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2587
* https://tracker.ceph.com/issues/52624
2588
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2589
* https://tracker.ceph.com/issues/52625
2590
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2591
* https://tracker.ceph.com/issues/52439
2592
    qa: acls does not compile on centos stream
2593
* https://tracker.ceph.com/issues/50821
2594
    qa: untar_snap_rm failure during mds thrashing
2595
* https://tracker.ceph.com/issues/48773
2596
    qa: scrub does not complete
2597
* https://tracker.ceph.com/issues/52626
2598
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2599
* https://tracker.ceph.com/issues/51279
2600
    kclient hangs on umount (testing branch)
2601 21 Patrick Donnelly
2602
2603
h3. 2021 August 27
2604
2605
Several jobs died because of device failures.
2606
2607
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2608
2609
* https://tracker.ceph.com/issues/52430
2610
    mds: fast async create client mount breaks racy test
2611
* https://tracker.ceph.com/issues/52436
2612
    fs/ceph: "corrupt mdsmap"
2613
* https://tracker.ceph.com/issues/52437
2614
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2615
* https://tracker.ceph.com/issues/51282
2616
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2617
* https://tracker.ceph.com/issues/52438
2618
    qa: ffsb timeout
2619
* https://tracker.ceph.com/issues/52439
2620
    qa: acls does not compile on centos stream
2621 20 Patrick Donnelly
2622
2623
h3. 2021 July 30
2624
2625
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2626
2627
* https://tracker.ceph.com/issues/50250
2628
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2629
* https://tracker.ceph.com/issues/51282
2630
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2631
* https://tracker.ceph.com/issues/48773
2632
    qa: scrub does not complete
2633
* https://tracker.ceph.com/issues/51975
2634
    pybind/mgr/stats: KeyError
2635 19 Patrick Donnelly
2636
2637
h3. 2021 July 28
2638
2639
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2640
2641
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2642
2643
* https://tracker.ceph.com/issues/51905
2644
    qa: "error reading sessionmap 'mds1_sessionmap'"
2645
* https://tracker.ceph.com/issues/48773
2646
    qa: scrub does not complete
2647
* https://tracker.ceph.com/issues/50250
2648
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2649
* https://tracker.ceph.com/issues/51267
2650
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2651
* https://tracker.ceph.com/issues/51279
2652
    kclient hangs on umount (testing branch)
2653 18 Patrick Donnelly
2654
2655
h3. 2021 July 16
2656
2657
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2658
2659
* https://tracker.ceph.com/issues/48773
2660
    qa: scrub does not complete
2661
* https://tracker.ceph.com/issues/48772
2662
    qa: pjd: not ok 9, 44, 80
2663
* https://tracker.ceph.com/issues/45434
2664
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2665
* https://tracker.ceph.com/issues/51279
2666
    kclient hangs on umount (testing branch)
2667
* https://tracker.ceph.com/issues/50824
2668
    qa: snaptest-git-ceph bus error
2669 17 Patrick Donnelly
2670
2671
h3. 2021 July 04
2672
2673
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2674
2675
* https://tracker.ceph.com/issues/48773
2676
    qa: scrub does not complete
2677
* https://tracker.ceph.com/issues/39150
2678
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2679
* https://tracker.ceph.com/issues/45434
2680
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2681
* https://tracker.ceph.com/issues/51282
2682
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2683
* https://tracker.ceph.com/issues/48771
2684
    qa: iogen: workload fails to cause balancing
2685
* https://tracker.ceph.com/issues/51279
2686
    kclient hangs on umount (testing branch)
2687
* https://tracker.ceph.com/issues/50250
2688
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2689 16 Patrick Donnelly
2690
2691
h3. 2021 July 01
2692
2693
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2694
2695
* https://tracker.ceph.com/issues/51197
2696
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2697
* https://tracker.ceph.com/issues/50866
2698
    osd: stat mismatch on objects
2699
* https://tracker.ceph.com/issues/48773
2700
    qa: scrub does not complete
2701 15 Patrick Donnelly
2702
2703
h3. 2021 June 26
2704
2705
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2706
2707
* https://tracker.ceph.com/issues/51183
2708
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2709
* https://tracker.ceph.com/issues/51410
2710
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2711
* https://tracker.ceph.com/issues/48773
2712
    qa: scrub does not complete
2713
* https://tracker.ceph.com/issues/51282
2714
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2715
* https://tracker.ceph.com/issues/51169
2716
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2717
* https://tracker.ceph.com/issues/48772
2718
    qa: pjd: not ok 9, 44, 80
2719 14 Patrick Donnelly
2720
2721
h3. 2021 June 21
2722
2723
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2724
2725
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2726
2727
* https://tracker.ceph.com/issues/51282
2728
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2729
* https://tracker.ceph.com/issues/51183
2730
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2731
* https://tracker.ceph.com/issues/48773
2732
    qa: scrub does not complete
2733
* https://tracker.ceph.com/issues/48771
2734
    qa: iogen: workload fails to cause balancing
2735
* https://tracker.ceph.com/issues/51169
2736
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2737
* https://tracker.ceph.com/issues/50495
2738
    libcephfs: shutdown race fails with status 141
2739
* https://tracker.ceph.com/issues/45434
2740
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2741
* https://tracker.ceph.com/issues/50824
2742
    qa: snaptest-git-ceph bus error
2743
* https://tracker.ceph.com/issues/50223
2744
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2745 13 Patrick Donnelly
2746
2747
h3. 2021 June 16
2748
2749
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2750
2751
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2752
2753
* https://tracker.ceph.com/issues/45434
2754
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2755
* https://tracker.ceph.com/issues/51169
2756
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2757
* https://tracker.ceph.com/issues/43216
2758
    MDSMonitor: removes MDS coming out of quorum election
2759
* https://tracker.ceph.com/issues/51278
2760
    mds: "FAILED ceph_assert(!segments.empty())"
2761
* https://tracker.ceph.com/issues/51279
2762
    kclient hangs on umount (testing branch)
2763
* https://tracker.ceph.com/issues/51280
2764
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2765
* https://tracker.ceph.com/issues/51183
2766
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2767
* https://tracker.ceph.com/issues/51281
2768
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2769
* https://tracker.ceph.com/issues/48773
2770
    qa: scrub does not complete
2771
* https://tracker.ceph.com/issues/51076
2772
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2773
* https://tracker.ceph.com/issues/51228
2774
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2775
* https://tracker.ceph.com/issues/51282
2776
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2777 12 Patrick Donnelly
2778
2779
h3. 2021 June 14
2780
2781
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2782
2783
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2784
2785
* https://tracker.ceph.com/issues/51169
2786
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2787
* https://tracker.ceph.com/issues/51228
2788
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2789
* https://tracker.ceph.com/issues/48773
2790
    qa: scrub does not complete
2791
* https://tracker.ceph.com/issues/51183
2792
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2793
* https://tracker.ceph.com/issues/45434
2794
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2795
* https://tracker.ceph.com/issues/51182
2796
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2797
* https://tracker.ceph.com/issues/51229
2798
    qa: test_multi_snap_schedule list difference failure
2799
* https://tracker.ceph.com/issues/50821
2800
    qa: untar_snap_rm failure during mds thrashing
2801 11 Patrick Donnelly
2802
2803
h3. 2021 June 13
2804
2805
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2806
2807
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2808
2809
* https://tracker.ceph.com/issues/51169
2810
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2811
* https://tracker.ceph.com/issues/48773
2812
    qa: scrub does not complete
2813
* https://tracker.ceph.com/issues/51182
2814
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2815
* https://tracker.ceph.com/issues/51183
2816
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2817
* https://tracker.ceph.com/issues/51197
2818
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2819
* https://tracker.ceph.com/issues/45434
2820 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2821
2822
h3. 2021 June 11
2823
2824
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2825
2826
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2827
2828
* https://tracker.ceph.com/issues/51169
2829
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2830
* https://tracker.ceph.com/issues/45434
2831
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2832
* https://tracker.ceph.com/issues/48771
2833
    qa: iogen: workload fails to cause balancing
2834
* https://tracker.ceph.com/issues/43216
2835
    MDSMonitor: removes MDS coming out of quorum election
2836
* https://tracker.ceph.com/issues/51182
2837
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2838
* https://tracker.ceph.com/issues/50223
2839
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2840
* https://tracker.ceph.com/issues/48773
2841
    qa: scrub does not complete
2842
* https://tracker.ceph.com/issues/51183
2843
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2844
* https://tracker.ceph.com/issues/51184
2845
    qa: fs:bugs does not specify distro
2846 9 Patrick Donnelly
2847
2848
h3. 2021 June 03
2849
2850
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2851
2852
* https://tracker.ceph.com/issues/45434
2853
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2854
* https://tracker.ceph.com/issues/50016
2855
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2856
* https://tracker.ceph.com/issues/50821
2857
    qa: untar_snap_rm failure during mds thrashing
2858
* https://tracker.ceph.com/issues/50622 (regression)
2859
    msg: active_connections regression
2860
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2861
    qa: failed umount in test_volumes
2862
* https://tracker.ceph.com/issues/48773
2863
    qa: scrub does not complete
2864
* https://tracker.ceph.com/issues/43216
2865
    MDSMonitor: removes MDS coming out of quorum election
2866 7 Patrick Donnelly
2867
2868 8 Patrick Donnelly
h3. 2021 May 18
2869
2870
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2871
2872
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2873
looked better. Some odd new noise in the rerun relating to packaging and "No
2874
module named 'tasks.ceph'".
2875
2876
* https://tracker.ceph.com/issues/50824
2877
    qa: snaptest-git-ceph bus error
2878
* https://tracker.ceph.com/issues/50622 (regression)
2879
    msg: active_connections regression
2880
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2881
    qa: failed umount in test_volumes
2882
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2883
    qa: quota failure
2884
2885
2886 7 Patrick Donnelly
h3. 2021 May 18
2887
2888
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2889
2890
* https://tracker.ceph.com/issues/50821
2891
    qa: untar_snap_rm failure during mds thrashing
2892
* https://tracker.ceph.com/issues/48773
2893
    qa: scrub does not complete
2894
* https://tracker.ceph.com/issues/45591
2895
    mgr: FAILED ceph_assert(daemon != nullptr)
2896
* https://tracker.ceph.com/issues/50866
2897
    osd: stat mismatch on objects
2898
* https://tracker.ceph.com/issues/50016
2899
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2900
* https://tracker.ceph.com/issues/50867
2901
    qa: fs:mirror: reduced data availability
2902
2904
* https://tracker.ceph.com/issues/50622 (regression)
2905
    msg: active_connections regression
2906
* https://tracker.ceph.com/issues/50223
2907
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2908
* https://tracker.ceph.com/issues/50868
2909
    qa: "kern.log.gz already exists; not overwritten"
2910
* https://tracker.ceph.com/issues/50870
2911
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2912 6 Patrick Donnelly
2913
2914
h3. 2021 May 11
2915
2916
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2917
2918
* one class of failures caused by PR
2919
* https://tracker.ceph.com/issues/48812
2920
    qa: test_scrub_pause_and_resume_with_abort failure
2921
* https://tracker.ceph.com/issues/50390
2922
    mds: monclient: wait_auth_rotating timed out after 30
2923
* https://tracker.ceph.com/issues/48773
2924
    qa: scrub does not complete
2925
* https://tracker.ceph.com/issues/50821
2926
    qa: untar_snap_rm failure during mds thrashing
2927
* https://tracker.ceph.com/issues/50224
2928
    qa: test_mirroring_init_failure_with_recovery failure
2929
* https://tracker.ceph.com/issues/50622 (regression)
2930
    msg: active_connections regression
2931
* https://tracker.ceph.com/issues/50825
2932
    qa: snaptest-git-ceph hang during mon thrashing v2
2933
2935
* https://tracker.ceph.com/issues/50823
2936
    qa: RuntimeError: timeout waiting for cluster to stabilize
2937 5 Patrick Donnelly
2938
2939
h3. 2021 May 14
2940
2941
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2942
2943
* https://tracker.ceph.com/issues/48812
2944
    qa: test_scrub_pause_and_resume_with_abort failure
2945
* https://tracker.ceph.com/issues/50821
2946
    qa: untar_snap_rm failure during mds thrashing
2947
* https://tracker.ceph.com/issues/50622 (regression)
2948
    msg: active_connections regression
2949
* https://tracker.ceph.com/issues/50822
2950
    qa: testing kernel patch for client metrics causes mds abort
2951
* https://tracker.ceph.com/issues/48773
2952
    qa: scrub does not complete
2953
* https://tracker.ceph.com/issues/50823
2954
    qa: RuntimeError: timeout waiting for cluster to stabilize
2955
* https://tracker.ceph.com/issues/50824
2956
    qa: snaptest-git-ceph bus error
2957
* https://tracker.ceph.com/issues/50825
2958
    qa: snaptest-git-ceph hang during mon thrashing v2
2959
* https://tracker.ceph.com/issues/50826
2960
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2961 4 Patrick Donnelly
2962
2963
h3. 2021 May 01
2964
2965
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2966
2967
* https://tracker.ceph.com/issues/45434
2968
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2969
* https://tracker.ceph.com/issues/50281
2970
    qa: untar_snap_rm timeout
2971
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2972
    qa: quota failure
2973
* https://tracker.ceph.com/issues/48773
2974
    qa: scrub does not complete
2975
* https://tracker.ceph.com/issues/50390
2976
    mds: monclient: wait_auth_rotating timed out after 30
2977
* https://tracker.ceph.com/issues/50250
2978
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2979
* https://tracker.ceph.com/issues/50622 (regression)
2980
    msg: active_connections regression
2981
* https://tracker.ceph.com/issues/45591
2982
    mgr: FAILED ceph_assert(daemon != nullptr)
2983
* https://tracker.ceph.com/issues/50221
2984
    qa: snaptest-git-ceph failure in git diff
2985
* https://tracker.ceph.com/issues/50016
2986
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2987 3 Patrick Donnelly
2988
2989
h3. 2021 Apr 15
2990
2991
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2992
2993
* https://tracker.ceph.com/issues/50281
2994
    qa: untar_snap_rm timeout
2995
* https://tracker.ceph.com/issues/50220
2996
    qa: dbench workload timeout
2997
* https://tracker.ceph.com/issues/50246
2998
    mds: failure replaying journal (EMetaBlob)
2999
* https://tracker.ceph.com/issues/50250
3000
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3001
* https://tracker.ceph.com/issues/50016
3002
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3003
* https://tracker.ceph.com/issues/50222
3004
    osd: 5.2s0 deep-scrub : stat mismatch
3005
* https://tracker.ceph.com/issues/45434
3006
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3007
* https://tracker.ceph.com/issues/49845
3008
    qa: failed umount in test_volumes
3009
* https://tracker.ceph.com/issues/37808
3010
    osd: osdmap cache weak_refs assert during shutdown
3011
* https://tracker.ceph.com/issues/50387
3012
    client: fs/snaps failure
3013
* https://tracker.ceph.com/issues/50389
3014
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
3015
* https://tracker.ceph.com/issues/50216
3016
    qa: "ls: cannot access 'lost+found': No such file or directory"
3017
* https://tracker.ceph.com/issues/50390
3018
    mds: monclient: wait_auth_rotating timed out after 30
3019
3020 1 Patrick Donnelly
3021
3022 2 Patrick Donnelly
h3. 2021 Apr 08
3023
3024
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
3025
3026
* https://tracker.ceph.com/issues/45434
3027
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3028
* https://tracker.ceph.com/issues/50016
3029
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3030
* https://tracker.ceph.com/issues/48773
3031
    qa: scrub does not complete
3032
* https://tracker.ceph.com/issues/50279
3033
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
3034
* https://tracker.ceph.com/issues/50246
3035
    mds: failure replaying journal (EMetaBlob)
3036
* https://tracker.ceph.com/issues/48365
3037
    qa: ffsb build failure on CentOS 8.2
3038
* https://tracker.ceph.com/issues/50216
3039
    qa: "ls: cannot access 'lost+found': No such file or directory"
3040
* https://tracker.ceph.com/issues/50223
3041
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3042
* https://tracker.ceph.com/issues/50280
3043
    cephadm: RuntimeError: uid/gid not found
3044
* https://tracker.ceph.com/issues/50281
3045
    qa: untar_snap_rm timeout
3046
3047 1 Patrick Donnelly
h3. 2021 Apr 08
3048
3049
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
3050
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
3051
3052
* https://tracker.ceph.com/issues/50246
3053
    mds: failure replaying journal (EMetaBlob)
3054
* https://tracker.ceph.com/issues/50250
3055
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3056
3057
3058
h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

One additional failure was caused by PR: https://github.com/ceph/ceph/pull/39969
h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing