
1 221 Patrick Donnelly
h1. <code>main</code> branch
2 1 Patrick Donnelly
3 247 Rishabh Dave
h3. ADD NEW ENTRY HERE
4
5 253 Venky Shankar
h3. 2024-04-04
6
7
https://tracker.ceph.com/issues/65300
8
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240330.172700
9
10
(Many `sudo systemctl stop ceph-ba42f8d0-efae-11ee-b647-cb9ed24678a4@mon.a` failures in this run)
11
12
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
13
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
14
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
15
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
16 254 Venky Shankar
* "qa: scrub - object missing on disk; some files may be lost":https://tracker.ceph.com/issues/48562
17
* "upgrade stalls after upgrading one ceph-mgr daemon":https://tracker.ceph.com/issues/65263
18 253 Venky Shankar
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
19
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
20 254 Venky Shankar
* "qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)":https://tracker.ceph.com/issues/65246
21
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
22
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314
23 253 Venky Shankar
24 249 Rishabh Dave
h3. 4 Apr 2024
25 246 Rishabh Dave
26
https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/
27
28
* https://tracker.ceph.com/issues/64927
29
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
30
* https://tracker.ceph.com/issues/65022
31
  qa: test_max_items_per_obj open procs not fully cleaned up
32
* https://tracker.ceph.com/issues/63699
33
  qa: failed cephfs-shell test_reading_conf
34
* https://tracker.ceph.com/issues/63700
35
  qa: test_cd_with_args failure
36
* https://tracker.ceph.com/issues/65136
37
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
38
* https://tracker.ceph.com/issues/65246
39
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
40
41 248 Rishabh Dave
42 246 Rishabh Dave
* https://tracker.ceph.com/issues/58945
43 1 Patrick Donnelly
  qa: xfstests-dev's generic test suite has failures with fuse client
44
* https://tracker.ceph.com/issues/57656
45 251 Rishabh Dave
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
46 1 Patrick Donnelly
* https://tracker.ceph.com/issues/63265
47
  qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
48 246 Rishabh Dave
* https://tracker.ceph.com/issues/62067
49 251 Rishabh Dave
  ffsb.sh failure "Resource temporarily unavailable"
50 246 Rishabh Dave
* https://tracker.ceph.com/issues/63949
51
  leak in mds.c detected by valgrind during CephFS QA run
52
* https://tracker.ceph.com/issues/48562
53
  qa: scrub - object missing on disk; some files may be lost
54
* https://tracker.ceph.com/issues/65020
55
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
56
* https://tracker.ceph.com/issues/64572
57
  workunits/fsx.sh failure
58
* https://tracker.ceph.com/issues/57676
59
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
60 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64502
61 246 Rishabh Dave
  client: ceph-fuse fails to unmount after upgrade to main
62 1 Patrick Donnelly
* https://tracker.ceph.com/issues/54741
63
  crash: MDSTableClient::got_journaled_ack(unsigned long)
64 250 Rishabh Dave
65 248 Rishabh Dave
* https://tracker.ceph.com/issues/65265
66
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
67 1 Patrick Donnelly
* https://tracker.ceph.com/issues/65308
68
  qa: fs was offline but also unexpectedly degraded
69
* https://tracker.ceph.com/issues/65309
70
  qa: dbench.sh failed with "ERROR: handle 10318 was not found"
71 250 Rishabh Dave
72
* https://tracker.ceph.com/issues/65018
73 251 Rishabh Dave
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
74 250 Rishabh Dave
* https://tracker.ceph.com/issues/52624
75
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
76 245 Rishabh Dave
77 240 Patrick Donnelly
h3. 2024-04-02
78
79
https://tracker.ceph.com/issues/65215
80
81
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
82
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
83
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
84
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
85
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
86
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
87
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
88
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
89
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
90
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
91 255 Patrick Donnelly
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
92 241 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
93
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
94
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
95
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
96 244 Patrick Donnelly
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
97 241 Patrick Donnelly
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534
98 240 Patrick Donnelly
99 236 Patrick Donnelly
h3. 2024-03-28
100
101
https://tracker.ceph.com/issues/65213
102
103 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
104
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
105
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
106 238 Patrick Donnelly
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
107
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
108
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
109 239 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
110
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
111
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
112
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
113
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
114
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
115
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
116
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
117
118
119 236 Patrick Donnelly
120 235 Milind Changire
h3. 2024-03-25
121
122
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
123
* https://tracker.ceph.com/issues/64502
124
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
125
126
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
127
128
* https://tracker.ceph.com/issues/62245
129
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
130
131
132 228 Patrick Donnelly
h3. 2024-03-20
133
134 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
135 228 Patrick Donnelly
136 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
137
138 229 Patrick Donnelly
Ubuntu jobs were filtered out because the builds were skipped by jenkins/shaman.
139 1 Patrick Donnelly
140 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
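
For context, this is roughly what such a log check amounts to. The sketch below is only an illustration, not the actual teuthology code, and the cluster log path and ignorelist file are assumptions:

<pre><code class="bash">
#!/usr/bin/env bash
# Illustration only -- not the teuthology implementation.
# Scan the cluster log for WRN/ERR lines not covered by an ignorelist of
# regexes; any line left over fails the check.
CLUSTER_LOG=${1:-/var/log/ceph/ceph.log}   # assumed log location
IGNORELIST=${2:-ignorelist.txt}            # assumed file, one regex per line

if grep -E '\[(WRN|ERR)\]' "$CLUSTER_LOG" | grep -Ev -f "$IGNORELIST" | grep -q .; then
    echo "unexpected WRN/ERR entries in cluster log" >&2
    exit 1
fi
echo "cluster log clean"
</code></pre>

With the check effectively disabled before the fix, jobs with stray WRN/ERR entries passed; with it restored, those jobs now fail as intended.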
141 228 Patrick Donnelly
142 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
143
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
144
* https://tracker.ceph.com/issues/64572
145
    workunits/fsx.sh failure
146
* https://tracker.ceph.com/issues/65018
147
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
148
* https://tracker.ceph.com/issues/64707 (new issue)
149
    suites/fsstress.sh hangs on one client - test times out
150 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
151
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
152
* https://tracker.ceph.com/issues/59684
153
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
154 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
155
    qa: "ceph tell 4.3a deep-scrub" command not found
156
* https://tracker.ceph.com/issues/54108
157
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
158
* https://tracker.ceph.com/issues/65019
159
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log 
160
* https://tracker.ceph.com/issues/65020
161
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
162
* https://tracker.ceph.com/issues/65021
163
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
164
* https://tracker.ceph.com/issues/63699
165
    qa: failed cephfs-shell test_reading_conf
166 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
167
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
168
* https://tracker.ceph.com/issues/50821
169
    qa: untar_snap_rm failure during mds thrashing
170 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
171
    qa: test_max_items_per_obj open procs not fully cleaned up
172 228 Patrick Donnelly
173 226 Venky Shankar
h3.  14th March 2024
174
175
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
176
177 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See https://tracker.ceph.com/issues/64679#note-4)
178 226 Venky Shankar
179
* https://tracker.ceph.com/issues/62067
180
    ffsb.sh failure "Resource temporarily unavailable"
181
* https://tracker.ceph.com/issues/57676
182
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
183
* https://tracker.ceph.com/issues/64502
184
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
185
* https://tracker.ceph.com/issues/64572
186
    workunits/fsx.sh failure
187
* https://tracker.ceph.com/issues/63700
188
    qa: test_cd_with_args failure
189
* https://tracker.ceph.com/issues/59684
190
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
191
* https://tracker.ceph.com/issues/61243
192
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
193
194 225 Venky Shankar
h3. 5th March 2024
195
196
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
197
198
* https://tracker.ceph.com/issues/57676
199
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
200
* https://tracker.ceph.com/issues/64502
201
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
202
* https://tracker.ceph.com/issues/63949
203
    leak in mds.c detected by valgrind during CephFS QA run
204
* https://tracker.ceph.com/issues/57656
205
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
206
* https://tracker.ceph.com/issues/63699
207
    qa: failed cephfs-shell test_reading_conf
208
* https://tracker.ceph.com/issues/64572
209
    workunits/fsx.sh failure
210
* https://tracker.ceph.com/issues/64707 (new issue)
211
    suites/fsstress.sh hangs on one client - test times out
212
* https://tracker.ceph.com/issues/59684
213
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
214
* https://tracker.ceph.com/issues/63700
215
    qa: test_cd_with_args failure
216
* https://tracker.ceph.com/issues/64711
217
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
218
* https://tracker.ceph.com/issues/64729 (new issue)
219
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
220
* https://tracker.ceph.com/issues/64730
221
    fs/misc/multiple_rsync.sh workunit times out
222
223 224 Venky Shankar
h3. 26th Feb 2024
224
225
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
226
227
(This run is a bit messy due to
228
229
  a) OCI runtime issues in the testing kernel with centos9
230
  b) SELinux denials related failures
231
  c) Unrelated MON_DOWN warnings)
232
233
* https://tracker.ceph.com/issues/57676
234
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
235
* https://tracker.ceph.com/issues/63700
236
    qa: test_cd_with_args failure
237
* https://tracker.ceph.com/issues/63949
238
    leak in mds.c detected by valgrind during CephFS QA run
239
* https://tracker.ceph.com/issues/59684
240
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
241
* https://tracker.ceph.com/issues/61243
242
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
243
* https://tracker.ceph.com/issues/63699
244
    qa: failed cephfs-shell test_reading_conf
245
* https://tracker.ceph.com/issues/64172
246
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
247
* https://tracker.ceph.com/issues/57656
248
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
249
* https://tracker.ceph.com/issues/64572
250
    workunits/fsx.sh failure
251
252 222 Patrick Donnelly
h3. 20th Feb 2024
253
254
https://github.com/ceph/ceph/pull/55601
255
https://github.com/ceph/ceph/pull/55659
256
257
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
258
259
* https://tracker.ceph.com/issues/64502
260
    client: quincy ceph-fuse fails to unmount after upgrade to main
261
262 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is issue #64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
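
A minimal way to observe the issue #64502 behaviour by hand might look like the sketch below; the mountpoint and the 30-second wait are assumptions, not part of the QA suite:

<pre><code class="bash">
#!/usr/bin/env bash
# Rough manual check: does the ceph-fuse mount detach shortly after
# fusermount -u, or does it linger until the daemons are torn down?
MNT=${1:-/mnt/cephfs}   # assumed mountpoint

fusermount -u "$MNT"

for i in $(seq 1 30); do
    if ! mountpoint -q "$MNT"; then
        echo "unmounted after ~${i}s"
        exit 0
    fi
    sleep 1
done
echo "still mounted after 30s: ceph-fuse did not unmount" >&2
exit 1
</code></pre>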
263 218 Venky Shankar
264
h3. 19th Feb 2024
265
266 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
267
268 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
269
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
270
* https://tracker.ceph.com/issues/63700
271
    qa: test_cd_with_args failure
272
* https://tracker.ceph.com/issues/63141
273
    qa/cephfs: test_idem_unaffected_root_squash fails
274
* https://tracker.ceph.com/issues/59684
275
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
276
* https://tracker.ceph.com/issues/63949
277
    leak in mds.c detected by valgrind during CephFS QA run
278
* https://tracker.ceph.com/issues/63764
279
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
280
* https://tracker.ceph.com/issues/63699
281
    qa: failed cephfs-shell test_reading_conf
282 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
283
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
284 201 Rishabh Dave
285 217 Venky Shankar
h3. 29 Jan 2024
286
287
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
288
289
* https://tracker.ceph.com/issues/57676
290
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
291
* https://tracker.ceph.com/issues/63949
292
    leak in mds.c detected by valgrind during CephFS QA run
293
* https://tracker.ceph.com/issues/62067
294
    ffsb.sh failure "Resource temporarily unavailable"
295
* https://tracker.ceph.com/issues/64172
296
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
297
* https://tracker.ceph.com/issues/63265
298
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
299
* https://tracker.ceph.com/issues/61243
300
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
301
* https://tracker.ceph.com/issues/59684
302
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
303
* https://tracker.ceph.com/issues/57656
304
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
305
* https://tracker.ceph.com/issues/64209
306
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
307
308 216 Venky Shankar
h3. 17th Jan 2024
309
310
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
311
312
* https://tracker.ceph.com/issues/63764
313
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
314
* https://tracker.ceph.com/issues/57676
315
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
316
* https://tracker.ceph.com/issues/51964
317
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
318
* https://tracker.ceph.com/issues/63949
319
    leak in mds.c detected by valgrind during CephFS QA run
320
* https://tracker.ceph.com/issues/62067
321
    ffsb.sh failure "Resource temporarily unavailable"
322
* https://tracker.ceph.com/issues/61243
323
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
324
* https://tracker.ceph.com/issues/63259
325
    mds: failed to store backtrace and force file system read-only
326
* https://tracker.ceph.com/issues/63265
327
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
328
329
h3. 16 Jan 2024
330 215 Rishabh Dave
331 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
332
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
333
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
334
335
* https://tracker.ceph.com/issues/63764
336
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
337
* https://tracker.ceph.com/issues/63141
338
  qa/cephfs: test_idem_unaffected_root_squash fails
339
* https://tracker.ceph.com/issues/62067
340
  ffsb.sh failure "Resource temporarily unavailable" 
341
* https://tracker.ceph.com/issues/51964
342
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
343
* https://tracker.ceph.com/issues/54462 
344
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
345
* https://tracker.ceph.com/issues/57676
346
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
347
348
* https://tracker.ceph.com/issues/63949
349
  valgrind leak in MDS
350
* https://tracker.ceph.com/issues/64041
351
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
352
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
353
* in the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS
354
355 213 Venky Shankar
h3. 06 Dec 2023
356
357
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
358
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
359
360
* https://tracker.ceph.com/issues/63764
361
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
362
* https://tracker.ceph.com/issues/63233
363
    mon|client|mds: valgrind reports possible leaks in the MDS
364
* https://tracker.ceph.com/issues/57676
365
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
366
* https://tracker.ceph.com/issues/62580
367
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
368
* https://tracker.ceph.com/issues/62067
369
    ffsb.sh failure "Resource temporarily unavailable"
370
* https://tracker.ceph.com/issues/61243
371
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
372
* https://tracker.ceph.com/issues/62081
373
    tasks/fscrypt-common does not finish, timesout
374
* https://tracker.ceph.com/issues/63265
375
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
376
* https://tracker.ceph.com/issues/63806
377
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
378
379 211 Patrick Donnelly
h3. 30 Nov 2023
380
381
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
382
383
* https://tracker.ceph.com/issues/63699
384 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
385
* https://tracker.ceph.com/issues/63700
386
    qa: test_cd_with_args failure
387 211 Patrick Donnelly
388 210 Venky Shankar
h3. 29 Nov 2023
389
390
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
391
392
* https://tracker.ceph.com/issues/63233
393
    mon|client|mds: valgrind reports possible leaks in the MDS
394
* https://tracker.ceph.com/issues/63141
395
    qa/cephfs: test_idem_unaffected_root_squash fails
396
* https://tracker.ceph.com/issues/57676
397
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
398
* https://tracker.ceph.com/issues/57655
399
    qa: fs:mixed-clients kernel_untar_build failure
400
* https://tracker.ceph.com/issues/62067
401
    ffsb.sh failure "Resource temporarily unavailable"
402
* https://tracker.ceph.com/issues/61243
403
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
404
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
405
* https://tracker.ceph.com/issues/62810
406
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
407
408 206 Venky Shankar
h3. 14 Nov 2023
409 207 Milind Changire
(Milind)
410
411
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
412
413
* https://tracker.ceph.com/issues/53859
414
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
415
* https://tracker.ceph.com/issues/63233
416
  mon|client|mds: valgrind reports possible leaks in the MDS
417
* https://tracker.ceph.com/issues/63521
418
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
419
* https://tracker.ceph.com/issues/57655
420
  qa: fs:mixed-clients kernel_untar_build failure
421
* https://tracker.ceph.com/issues/62580
422
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
423
* https://tracker.ceph.com/issues/57676
424
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
425
* https://tracker.ceph.com/issues/61243
426
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
427
* https://tracker.ceph.com/issues/63141
428
    qa/cephfs: test_idem_unaffected_root_squash fails
429
* https://tracker.ceph.com/issues/51964
430
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
431
* https://tracker.ceph.com/issues/63522
432
    No module named 'tasks.ceph_fuse'
433
    No module named 'tasks.kclient'
434
    No module named 'tasks.cephfs.fuse_mount'
435
    No module named 'tasks.ceph'
436
* https://tracker.ceph.com/issues/63523
437
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
438
439
440
h3. 14 Nov 2023
441 206 Venky Shankar
442
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
443
444
(ignore the fs:upgrade test failure - the PR is excluded from merge)
445
446
* https://tracker.ceph.com/issues/57676
447
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
448
* https://tracker.ceph.com/issues/63233
449
    mon|client|mds: valgrind reports possible leaks in the MDS
450
* https://tracker.ceph.com/issues/63141
451
    qa/cephfs: test_idem_unaffected_root_squash fails
452
* https://tracker.ceph.com/issues/62580
453
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
454
* https://tracker.ceph.com/issues/57655
455
    qa: fs:mixed-clients kernel_untar_build failure
456
* https://tracker.ceph.com/issues/51964
457
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
458
* https://tracker.ceph.com/issues/63519
459
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
460
* https://tracker.ceph.com/issues/57087
461
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
462
* https://tracker.ceph.com/issues/58945
463
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
464
465 204 Rishabh Dave
h3. 7 Nov 2023
466
467 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
468
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
469
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
470 204 Rishabh Dave
471
* https://tracker.ceph.com/issues/53859
472
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
473
* https://tracker.ceph.com/issues/63233
474
  mon|client|mds: valgrind reports possible leaks in the MDS
475
* https://tracker.ceph.com/issues/57655
476
  qa: fs:mixed-clients kernel_untar_build failure
477
* https://tracker.ceph.com/issues/57676
478
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
479
480
* https://tracker.ceph.com/issues/63473
481
  fsstress.sh failed with errno 124
482
483 202 Rishabh Dave
h3. 3 Nov 2023
484 203 Rishabh Dave
485 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
486
487
* https://tracker.ceph.com/issues/63141
488
  qa/cephfs: test_idem_unaffected_root_squash fails
489
* https://tracker.ceph.com/issues/63233
490
  mon|client|mds: valgrind reports possible leaks in the MDS
491
* https://tracker.ceph.com/issues/57656
492
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
493
* https://tracker.ceph.com/issues/57655
494
  qa: fs:mixed-clients kernel_untar_build failure
495
* https://tracker.ceph.com/issues/57676
496
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
497
498
* https://tracker.ceph.com/issues/59531
499
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
500
* https://tracker.ceph.com/issues/52624
501
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
502
503 198 Patrick Donnelly
h3. 24 October 2023
504
505
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
506
507 200 Patrick Donnelly
Two failures:
508
509
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
510
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
511
512
Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.
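
When the mount kill hangs like this, a quick manual triage along the lines below can help distinguish a stuck ceph-fuse process from a stuck kernel mount (debugging sketch only; the mountpoint is an assumption):

<pre><code class="bash">
#!/usr/bin/env bash
# Debugging sketch: is a ceph-fuse process still alive, and is the
# mountpoint still attached? Fall back to a lazy unmount if needed.
MNT=${1:-/mnt/cephfs}   # assumed mountpoint

pgrep -af ceph-fuse || echo "no ceph-fuse process running"

if mountpoint -q "$MNT"; then
    echo "$MNT is still mounted; attempting a lazy unmount"
    sudo umount -l "$MNT"
fi
</code></pre>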
513
514 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
515
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
516
* https://tracker.ceph.com/issues/57676
517 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
518
* https://tracker.ceph.com/issues/63233
519
    mon|client|mds: valgrind reports possible leaks in the MDS
520
* https://tracker.ceph.com/issues/59531
521
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
522
* https://tracker.ceph.com/issues/57655
523
    qa: fs:mixed-clients kernel_untar_build failure
524 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
525
    ffsb.sh failure "Resource temporarily unavailable"
526
* https://tracker.ceph.com/issues/63411
527
    qa: flush journal may cause timeouts of `scrub status`
528
* https://tracker.ceph.com/issues/61243
529
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
530
* https://tracker.ceph.com/issues/63141
531 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
532 148 Rishabh Dave
533 195 Venky Shankar
h3. 18 Oct 2023
534
535
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
536
537
* https://tracker.ceph.com/issues/52624
538
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
539
* https://tracker.ceph.com/issues/57676
540
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
541
* https://tracker.ceph.com/issues/63233
542
    mon|client|mds: valgrind reports possible leaks in the MDS
543
* https://tracker.ceph.com/issues/63141
544
    qa/cephfs: test_idem_unaffected_root_squash fails
545
* https://tracker.ceph.com/issues/59531
546
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
547
* https://tracker.ceph.com/issues/62658
548
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
549
* https://tracker.ceph.com/issues/62580
550
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
551
* https://tracker.ceph.com/issues/62067
552
    ffsb.sh failure "Resource temporarily unavailable"
553
* https://tracker.ceph.com/issues/57655
554
    qa: fs:mixed-clients kernel_untar_build failure
555
* https://tracker.ceph.com/issues/62036
556
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
557
* https://tracker.ceph.com/issues/58945
558
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
559
* https://tracker.ceph.com/issues/62847
560
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
561
562 193 Venky Shankar
h3. 13 Oct 2023
563
564
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
565
566
* https://tracker.ceph.com/issues/52624
567
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
568
* https://tracker.ceph.com/issues/62936
569
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
570
* https://tracker.ceph.com/issues/47292
571
    cephfs-shell: test_df_for_valid_file failure
572
* https://tracker.ceph.com/issues/63141
573
    qa/cephfs: test_idem_unaffected_root_squash fails
574
* https://tracker.ceph.com/issues/62081
575
    tasks/fscrypt-common does not finish, timesout
576 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
577
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
578 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
579
    mon|client|mds: valgrind reports possible leaks in the MDS
580 193 Venky Shankar
581 190 Patrick Donnelly
h3. 16 Oct 2023
582
583
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
584
585 192 Patrick Donnelly
Infrastructure issues:
586
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
587
    Host lost.
588
589 196 Patrick Donnelly
One follow-up fix:
590
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
591
592 192 Patrick Donnelly
Failures:
593
594
* https://tracker.ceph.com/issues/56694
595
    qa: avoid blocking forever on hung umount
596
* https://tracker.ceph.com/issues/63089
597
    qa: tasks/mirror times out
598
* https://tracker.ceph.com/issues/52624
599
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
600
* https://tracker.ceph.com/issues/59531
601
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
602
* https://tracker.ceph.com/issues/57676
603
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
604
* https://tracker.ceph.com/issues/62658 
605
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
606
* https://tracker.ceph.com/issues/61243
607
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
608
* https://tracker.ceph.com/issues/57656
609
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
610
* https://tracker.ceph.com/issues/63233
611
  mon|client|mds: valgrind reports possible leaks in the MDS
612 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
613
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
614 192 Patrick Donnelly
615 189 Rishabh Dave
h3. 9 Oct 2023
616
617
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
618
619
* https://tracker.ceph.com/issues/54460
620
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
621
* https://tracker.ceph.com/issues/63141
622
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
623
* https://tracker.ceph.com/issues/62937
624
  logrotate doesn't support parallel execution on same set of logfiles
625
* https://tracker.ceph.com/issues/61400
626
  valgrind+ceph-mon issues
627
* https://tracker.ceph.com/issues/57676
628
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
629
* https://tracker.ceph.com/issues/55805
630
  error during scrub thrashing reached max tries in 900 secs
631
632 188 Venky Shankar
h3. 26 Sep 2023
633
634
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
635
636
* https://tracker.ceph.com/issues/52624
637
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
638
* https://tracker.ceph.com/issues/62873
639
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
640
* https://tracker.ceph.com/issues/61400
641
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
642
* https://tracker.ceph.com/issues/57676
643
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
644
* https://tracker.ceph.com/issues/62682
645
    mon: no mdsmap broadcast after "fs set joinable" is set to true
646
* https://tracker.ceph.com/issues/63089
647
    qa: tasks/mirror times out
648
649 185 Rishabh Dave
h3. 22 Sep 2023
650
651
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
652
653
* https://tracker.ceph.com/issues/59348
654
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
655
* https://tracker.ceph.com/issues/59344
656
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
657
* https://tracker.ceph.com/issues/59531
658
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
659
* https://tracker.ceph.com/issues/61574
660
  build failure for mdtest project
661
* https://tracker.ceph.com/issues/62702
662
  fsstress.sh: MDS slow requests for the internal 'rename' requests
663
* https://tracker.ceph.com/issues/57676
664
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
665
666
* https://tracker.ceph.com/issues/62863 
667
  deadlock in ceph-fuse causes teuthology job to hang and fail
668
* https://tracker.ceph.com/issues/62870
669
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
670
* https://tracker.ceph.com/issues/62873
671
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
672
673 186 Venky Shankar
h3. 20 Sep 2023
674
675
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
676
677
* https://tracker.ceph.com/issues/52624
678
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
679
* https://tracker.ceph.com/issues/61400
680
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
681
* https://tracker.ceph.com/issues/61399
682
    libmpich: undefined references to fi_strerror
683
* https://tracker.ceph.com/issues/62081
684
    tasks/fscrypt-common does not finish, timesout
685
* https://tracker.ceph.com/issues/62658 
686
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
687
* https://tracker.ceph.com/issues/62915
688
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
689
* https://tracker.ceph.com/issues/59531
690
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
691
* https://tracker.ceph.com/issues/62873
692
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
693
* https://tracker.ceph.com/issues/62936
694
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
695
* https://tracker.ceph.com/issues/62937
696
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
697
* https://tracker.ceph.com/issues/62510
698
    snaptest-git-ceph.sh failure with fs/thrash
699
* https://tracker.ceph.com/issues/62081
700
    tasks/fscrypt-common does not finish, timesout
701
* https://tracker.ceph.com/issues/62126
702
    test failure: suites/blogbench.sh stops running
703 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
704
    mon: no mdsmap broadcast after "fs set joinable" is set to true
705 186 Venky Shankar
706 184 Milind Changire
h3. 19 Sep 2023
707
708
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
709
710
* https://tracker.ceph.com/issues/58220#note-9
711
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
712
* https://tracker.ceph.com/issues/62702
713
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
714
* https://tracker.ceph.com/issues/57676
715
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
716
* https://tracker.ceph.com/issues/59348
717
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
718
* https://tracker.ceph.com/issues/52624
719
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
720
* https://tracker.ceph.com/issues/51964
721
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
722
* https://tracker.ceph.com/issues/61243
723
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
724
* https://tracker.ceph.com/issues/59344
725
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
726
* https://tracker.ceph.com/issues/62873
727
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
728
* https://tracker.ceph.com/issues/59413
729
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
730
* https://tracker.ceph.com/issues/53859
731
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
732
* https://tracker.ceph.com/issues/62482
733
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
734
735 178 Patrick Donnelly
736 177 Venky Shankar
h3. 13 Sep 2023
737
738
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
739
740
* https://tracker.ceph.com/issues/52624
741
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
742
* https://tracker.ceph.com/issues/57655
743
    qa: fs:mixed-clients kernel_untar_build failure
744
* https://tracker.ceph.com/issues/57676
745
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
746
* https://tracker.ceph.com/issues/61243
747
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
748
* https://tracker.ceph.com/issues/62567
749
    postgres workunit times out - MDS_SLOW_REQUEST in logs
750
* https://tracker.ceph.com/issues/61400
751
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
752
* https://tracker.ceph.com/issues/61399
753
    libmpich: undefined references to fi_strerror
754
* https://tracker.ceph.com/issues/57655
755
    qa: fs:mixed-clients kernel_untar_build failure
756
* https://tracker.ceph.com/issues/57676
757
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
758
* https://tracker.ceph.com/issues/51964
759
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
760
* https://tracker.ceph.com/issues/62081
761
    tasks/fscrypt-common does not finish, timesout
762 178 Patrick Donnelly
763 179 Patrick Donnelly
h3. 2023 Sep 12
764 178 Patrick Donnelly
765
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
766 1 Patrick Donnelly
767 181 Patrick Donnelly
A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
768
769 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
770 181 Patrick Donnelly
771
Failures:
772
773 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
774
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
775
* https://tracker.ceph.com/issues/57656
776
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
777
* https://tracker.ceph.com/issues/55805
778
  error scrub thrashing reached max tries in 900 secs
779
* https://tracker.ceph.com/issues/62067
780
    ffsb.sh failure "Resource temporarily unavailable"
781
* https://tracker.ceph.com/issues/59344
782
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
783
* https://tracker.ceph.com/issues/61399
784 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
785
* https://tracker.ceph.com/issues/62832
786
  common: config_proxy deadlock during shutdown (and possibly other times)
787
* https://tracker.ceph.com/issues/59413
788 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
789 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
790
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
791
* https://tracker.ceph.com/issues/62567
792
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
793
* https://tracker.ceph.com/issues/54460
794
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
795
* https://tracker.ceph.com/issues/58220#note-9
796
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
797
* https://tracker.ceph.com/issues/59348
798
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
799 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
800
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
801
* https://tracker.ceph.com/issues/62848
802
    qa: fail_fs upgrade scenario hanging
803
* https://tracker.ceph.com/issues/62081
804
    tasks/fscrypt-common does not finish, timesout
805 177 Venky Shankar
806 176 Venky Shankar
h3. 11 Sep 2023
807 175 Venky Shankar
808
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
809
810
* https://tracker.ceph.com/issues/52624
811
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
812
* https://tracker.ceph.com/issues/61399
813
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
814
* https://tracker.ceph.com/issues/57655
815
    qa: fs:mixed-clients kernel_untar_build failure
816
* https://tracker.ceph.com/issues/61399
817
    ior build failure
818
* https://tracker.ceph.com/issues/59531
819
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
820
* https://tracker.ceph.com/issues/59344
821
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
822
* https://tracker.ceph.com/issues/59346
823
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
824
* https://tracker.ceph.com/issues/59348
825
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
826
* https://tracker.ceph.com/issues/57676
827
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
828
* https://tracker.ceph.com/issues/61243
829
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
830
* https://tracker.ceph.com/issues/62567
831
  postgres workunit times out - MDS_SLOW_REQUEST in logs
832
833
834 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
835
836
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
837
838
* https://tracker.ceph.com/issues/51964
839
  test_cephfs_mirror_restart_sync_on_blocklist failure
840
* https://tracker.ceph.com/issues/59348
841
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
842
* https://tracker.ceph.com/issues/53859
843
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
844
* https://tracker.ceph.com/issues/61892
845
  test_strays.TestStrays.test_snapshot_remove failed
846
* https://tracker.ceph.com/issues/54460
847
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
848
* https://tracker.ceph.com/issues/59346
849
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
850
* https://tracker.ceph.com/issues/59344
851
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
852
* https://tracker.ceph.com/issues/62484
853
  qa: ffsb.sh test failure
854
* https://tracker.ceph.com/issues/62567
855
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
856
  
857
* https://tracker.ceph.com/issues/61399
858
  ior build failure
859
* https://tracker.ceph.com/issues/57676
860
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
861
* https://tracker.ceph.com/issues/55805
862
  error scrub thrashing reached max tries in 900 secs
863
864 172 Rishabh Dave
h3. 6 Sep 2023
865 171 Rishabh Dave
866 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
867 171 Rishabh Dave
868 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
869
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
870 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
871
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
872 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
873 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
874
* https://tracker.ceph.com/issues/59348
875
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
876
* https://tracker.ceph.com/issues/54462
877
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
878
* https://tracker.ceph.com/issues/62556
879
  test_acls: xfstests_dev: python2 is missing
880
* https://tracker.ceph.com/issues/62067
881
  ffsb.sh failure "Resource temporarily unavailable"
882
* https://tracker.ceph.com/issues/57656
883
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
884 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
885
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
886 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
887 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
888
889 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
890
  ior build failure
891
* https://tracker.ceph.com/issues/57676
892
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
893
* https://tracker.ceph.com/issues/55805
894
  error scrub thrashing reached max tries in 900 secs
895 173 Rishabh Dave
896
* https://tracker.ceph.com/issues/62567
897
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
898
* https://tracker.ceph.com/issues/62702
899
  workunit test suites/fsstress.sh on smithi066 with status 124
900 170 Rishabh Dave
901
h3. 5 Sep 2023
902
903
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
904
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
905
  this run has failures, but according to Adam King they are not relevant and should be ignored
906
907
* https://tracker.ceph.com/issues/61892
908
  test_snapshot_remove (test_strays.TestStrays) failed
909
* https://tracker.ceph.com/issues/59348
910
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
911
* https://tracker.ceph.com/issues/54462
912
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
913
* https://tracker.ceph.com/issues/62067
914
  ffsb.sh failure "Resource temporarily unavailable"
915
* https://tracker.ceph.com/issues/57656 
916
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
917
* https://tracker.ceph.com/issues/59346
918
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
919
* https://tracker.ceph.com/issues/59344
920
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
921
* https://tracker.ceph.com/issues/50223
922
  client.xxxx isn't responding to mclientcaps(revoke)
923
* https://tracker.ceph.com/issues/57655
924
  qa: fs:mixed-clients kernel_untar_build failure
925
* https://tracker.ceph.com/issues/62187
926
  iozone.sh: line 5: iozone: command not found
927
 
928
* https://tracker.ceph.com/issues/61399
929
  ior build failure
930
* https://tracker.ceph.com/issues/57676
931
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
932
* https://tracker.ceph.com/issues/55805
933
  error scrub thrashing reached max tries in 900 secs
934 169 Venky Shankar
935
936
h3. 31 Aug 2023
937
938
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
939
940
* https://tracker.ceph.com/issues/52624
941
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
942
* https://tracker.ceph.com/issues/62187
943
    iozone: command not found
944
* https://tracker.ceph.com/issues/61399
945
    ior build failure
946
* https://tracker.ceph.com/issues/59531
947
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
948
* https://tracker.ceph.com/issues/61399
949
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
950
* https://tracker.ceph.com/issues/57655
951
    qa: fs:mixed-clients kernel_untar_build failure
952
* https://tracker.ceph.com/issues/59344
953
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
954
* https://tracker.ceph.com/issues/59346
955
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
956
* https://tracker.ceph.com/issues/59348
957
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
958
* https://tracker.ceph.com/issues/59413
959
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
960
* https://tracker.ceph.com/issues/62653
961
    qa: unimplemented fcntl command: 1036 with fsstress
962
* https://tracker.ceph.com/issues/61400
963
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
964
* https://tracker.ceph.com/issues/62658
965
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
966
* https://tracker.ceph.com/issues/62188
967
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
968 168 Venky Shankar
969
970
h3. 25 Aug 2023
971
972
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
973
974
* https://tracker.ceph.com/issues/59344
975
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
976
* https://tracker.ceph.com/issues/59346
977
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
978
* https://tracker.ceph.com/issues/59348
979
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
980
* https://tracker.ceph.com/issues/57655
981
    qa: fs:mixed-clients kernel_untar_build failure
982
* https://tracker.ceph.com/issues/61243
983
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
984
* https://tracker.ceph.com/issues/61399
985
    ior build failure
986
* https://tracker.ceph.com/issues/61399
987
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
988
* https://tracker.ceph.com/issues/62484
989
    qa: ffsb.sh test failure
990
* https://tracker.ceph.com/issues/59531
991
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
992
* https://tracker.ceph.com/issues/62510
993
    snaptest-git-ceph.sh failure with fs/thrash
994 167 Venky Shankar
995
996
h3. 24 Aug 2023
997
998
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
999
1000
* https://tracker.ceph.com/issues/57676
1001
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1002
* https://tracker.ceph.com/issues/51964
1003
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1004
* https://tracker.ceph.com/issues/59344
1005
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1006
* https://tracker.ceph.com/issues/59346
1007
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1008
* https://tracker.ceph.com/issues/59348
1009
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1010
* https://tracker.ceph.com/issues/61399
1011
    ior build failure
1012
* https://tracker.ceph.com/issues/61399
1013
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1014
* https://tracker.ceph.com/issues/62510
1015
    snaptest-git-ceph.sh failure with fs/thrash
1016
* https://tracker.ceph.com/issues/62484
1017
    qa: ffsb.sh test failure
1018
* https://tracker.ceph.com/issues/57087
1019
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1020
* https://tracker.ceph.com/issues/57656
1021
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1022
* https://tracker.ceph.com/issues/62187
1023
    iozone: command not found
1024
* https://tracker.ceph.com/issues/62188
1025
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1026
* https://tracker.ceph.com/issues/62567
1027
    postgres workunit times out - MDS_SLOW_REQUEST in logs
1028 166 Venky Shankar
1029
1030
h3. 22 Aug 2023
1031
1032
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
1033
1034
* https://tracker.ceph.com/issues/57676
1035
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1036
* https://tracker.ceph.com/issues/51964
1037
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1038
* https://tracker.ceph.com/issues/59344
1039
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1040
* https://tracker.ceph.com/issues/59346
1041
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1042
* https://tracker.ceph.com/issues/59348
1043
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1044
* https://tracker.ceph.com/issues/61399
1045
    ior build failure
1046
* https://tracker.ceph.com/issues/61399
1047
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1048
* https://tracker.ceph.com/issues/57655
1049
    qa: fs:mixed-clients kernel_untar_build failure
1050
* https://tracker.ceph.com/issues/61243
1051
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1052
* https://tracker.ceph.com/issues/62188
1053
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1054
* https://tracker.ceph.com/issues/62510
1055
    snaptest-git-ceph.sh failure with fs/thrash
1056
* https://tracker.ceph.com/issues/62511
1057
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
1058 165 Venky Shankar
1059
1060
h3. 14 Aug 2023
1061
1062
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1063
1064
* https://tracker.ceph.com/issues/51964
1065
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1066
* https://tracker.ceph.com/issues/61400
1067
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1068
* https://tracker.ceph.com/issues/61399
1069
    ior build failure
1070
* https://tracker.ceph.com/issues/59348
1071
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1072
* https://tracker.ceph.com/issues/59531
1073
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1074
* https://tracker.ceph.com/issues/59344
1075
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1076
* https://tracker.ceph.com/issues/59346
1077
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1078
* https://tracker.ceph.com/issues/61399
1079
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1080
* https://tracker.ceph.com/issues/59684 [kclient bug]
1081
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1082
* https://tracker.ceph.com/issues/61243 (NEW)
1083
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1084
* https://tracker.ceph.com/issues/57655
1085
    qa: fs:mixed-clients kernel_untar_build failure
1086
* https://tracker.ceph.com/issues/57656
1087
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1088 163 Venky Shankar
1089
1090
h3. 28 JULY 2023
1091
1092
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1093
1094
* https://tracker.ceph.com/issues/51964
1095
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1096
* https://tracker.ceph.com/issues/61400
1097
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1098
* https://tracker.ceph.com/issues/61399
1099
    ior build failure
1100
* https://tracker.ceph.com/issues/57676
1101
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1102
* https://tracker.ceph.com/issues/59348
1103
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1104
* https://tracker.ceph.com/issues/59531
1105
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1106
* https://tracker.ceph.com/issues/59344
1107
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1108
* https://tracker.ceph.com/issues/59346
1109
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1110
* https://github.com/ceph/ceph/pull/52556
1111
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1112
* https://tracker.ceph.com/issues/62187
1113
    iozone: command not found
1114
* https://tracker.ceph.com/issues/61399
1115
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1116
* https://tracker.ceph.com/issues/62188
1117 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1118 158 Rishabh Dave
1119
h3. 24 Jul 2023
1120
1121
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1122
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1123
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1124
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1125
One more run to check whether blogbench.sh fails every time:
1126
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1127
blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that the blogbench.sh failure was not related to any of the PRs under testing -
1128 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1129
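The wip-vs-base comparison above can be scripted. Below is a minimal sketch (not part of the QA tooling) that pulls job results from a paddles-style JSON API and diffs the failed-job descriptions of two runs; the base URL and the JSON field names ("status", "description") are assumptions and may need adjusting for the actual pulpito/paddles deployment. The run names are the ones linked in this section.

<pre><code class="python">
#!/usr/bin/env python3
# Sketch: compare failed-job descriptions between a wip-branch run and a
# base-branch run, to see which failures are caused by the PRs under test.
# The PADDLES base URL and the "status"/"description" fields are assumptions.
import requests

PADDLES = "http://paddles.front.sepia.ceph.com"  # assumed endpoint


def failed_descriptions(run_name):
    """Return the set of job descriptions that failed or died in a run."""
    jobs = requests.get(f"{PADDLES}/runs/{run_name}/jobs/", timeout=60).json()
    return {
        job.get("description") or ""
        for job in jobs
        if job.get("status") in ("fail", "dead")
    }


if __name__ == "__main__":
    wip = failed_descriptions(
        "rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi"
    )
    base = failed_descriptions(
        "rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi"
    )
    print("failing only with the PRs under test:")
    for desc in sorted(wip - base):
        print("  ", desc)
    print("failing on the base branch too (not caused by the PRs):")
    for desc in sorted(wip & base):
        print("  ", desc)
</code></pre>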
1130
* https://tracker.ceph.com/issues/61892
1131
  test_snapshot_remove (test_strays.TestStrays) failed
1132
* https://tracker.ceph.com/issues/53859
1133
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1134
* https://tracker.ceph.com/issues/61982
1135
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1136
* https://tracker.ceph.com/issues/52438
1137
  qa: ffsb timeout
1138
* https://tracker.ceph.com/issues/54460
1139
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1140
* https://tracker.ceph.com/issues/57655
1141
  qa: fs:mixed-clients kernel_untar_build failure
1142
* https://tracker.ceph.com/issues/48773
1143
  reached max tries: scrub does not complete
1144
* https://tracker.ceph.com/issues/58340
1145
  mds: fsstress.sh hangs with multimds
1146
* https://tracker.ceph.com/issues/61400
1147
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1148
* https://tracker.ceph.com/issues/57206
1149
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1150
  
1151
* https://tracker.ceph.com/issues/57656
1152
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1153
* https://tracker.ceph.com/issues/61399
1154
  ior build failure
1155
* https://tracker.ceph.com/issues/57676
1156
  error during scrub thrashing: backtrace
1157
  
1158
* https://tracker.ceph.com/issues/38452
1159
  'sudo -u postgres -- pgbench -s 500 -i' failed
1160 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1161 157 Venky Shankar
  blogbench.sh failure
1162
1163
h3. 18 July 2023
1164
1165
* https://tracker.ceph.com/issues/52624
1166
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1167
* https://tracker.ceph.com/issues/57676
1168
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1169
* https://tracker.ceph.com/issues/54460
1170
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1171
* https://tracker.ceph.com/issues/57655
1172
    qa: fs:mixed-clients kernel_untar_build failure
1173
* https://tracker.ceph.com/issues/51964
1174
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1175
* https://tracker.ceph.com/issues/59344
1176
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1177
* https://tracker.ceph.com/issues/61182
1178
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1179
* https://tracker.ceph.com/issues/61957
1180
    test_client_limits.TestClientLimits.test_client_release_bug
1181
* https://tracker.ceph.com/issues/59348
1182
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1183
* https://tracker.ceph.com/issues/61892
1184
    test_strays.TestStrays.test_snapshot_remove failed
1185
* https://tracker.ceph.com/issues/59346
1186
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1187
* https://tracker.ceph.com/issues/44565
1188
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1189
* https://tracker.ceph.com/issues/62067
1190
    ffsb.sh failure "Resource temporarily unavailable"
1191 156 Venky Shankar
1192
1193
h3. 17 July 2023
1194
1195
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1196
1197
* https://tracker.ceph.com/issues/61982
1198
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1199
* https://tracker.ceph.com/issues/59344
1200
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1201
* https://tracker.ceph.com/issues/61182
1202
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1203
* https://tracker.ceph.com/issues/61957
1204
    test_client_limits.TestClientLimits.test_client_release_bug
1205
* https://tracker.ceph.com/issues/61400
1206
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1207
* https://tracker.ceph.com/issues/59348
1208
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1209
* https://tracker.ceph.com/issues/61892
1210
    test_strays.TestStrays.test_snapshot_remove failed
1211
* https://tracker.ceph.com/issues/59346
1212
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1213
* https://tracker.ceph.com/issues/62036
1214
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1215
* https://tracker.ceph.com/issues/61737
1216
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1217
* https://tracker.ceph.com/issues/44565
1218
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1219 155 Rishabh Dave
1220 1 Patrick Donnelly
1221 153 Rishabh Dave
h3. 13 July 2023 Run 2
1222 152 Rishabh Dave
1223
1224
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1225
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1226
1227
* https://tracker.ceph.com/issues/61957
1228
  test_client_limits.TestClientLimits.test_client_release_bug
1229
* https://tracker.ceph.com/issues/61982
1230
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1231
* https://tracker.ceph.com/issues/59348
1232
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1233
* https://tracker.ceph.com/issues/59344
1234
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1235
* https://tracker.ceph.com/issues/54460
1236
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1237
* https://tracker.ceph.com/issues/57655
1238
  qa: fs:mixed-clients kernel_untar_build failure
1239
* https://tracker.ceph.com/issues/61400
1240
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1241
* https://tracker.ceph.com/issues/61399
1242
  ior build failure
1243
1244 151 Venky Shankar
h3. 13 July 2023
1245
1246
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1247
1248
* https://tracker.ceph.com/issues/54460
1249
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1250
* https://tracker.ceph.com/issues/61400
1251
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1252
* https://tracker.ceph.com/issues/57655
1253
    qa: fs:mixed-clients kernel_untar_build failure
1254
* https://tracker.ceph.com/issues/61945
1255
    LibCephFS.DelegTimeout failure
1256
* https://tracker.ceph.com/issues/52624
1257
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1258
* https://tracker.ceph.com/issues/57676
1259
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1260
* https://tracker.ceph.com/issues/59348
1261
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1262
* https://tracker.ceph.com/issues/59344
1263
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1264
* https://tracker.ceph.com/issues/51964
1265
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1266
* https://tracker.ceph.com/issues/59346
1267
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1268
* https://tracker.ceph.com/issues/61982
1269
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1270 150 Rishabh Dave
1271
1272
h3. 13 Jul 2023
1273
1274
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1275
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1276
1277
* https://tracker.ceph.com/issues/61957
1278
  test_client_limits.TestClientLimits.test_client_release_bug
1279
* https://tracker.ceph.com/issues/59348
1280
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1281
* https://tracker.ceph.com/issues/59346
1282
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1283
* https://tracker.ceph.com/issues/48773
1284
  scrub does not complete: reached max tries
1285
* https://tracker.ceph.com/issues/59344
1286
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1287
* https://tracker.ceph.com/issues/52438
1288
  qa: ffsb timeout
1289
* https://tracker.ceph.com/issues/57656
1290
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1291
* https://tracker.ceph.com/issues/58742
1292
  xfstests-dev: kcephfs: generic
1293
* https://tracker.ceph.com/issues/61399
1294 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1295 149 Rishabh Dave
1296 148 Rishabh Dave
h3. 12 July 2023
1297
1298
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1299
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1300
1301
* https://tracker.ceph.com/issues/61892
1302
  test_strays.TestStrays.test_snapshot_remove failed
1303
* https://tracker.ceph.com/issues/59348
1304
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1305
* https://tracker.ceph.com/issues/53859
1306
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1307
* https://tracker.ceph.com/issues/59346
1308
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1309
* https://tracker.ceph.com/issues/58742
1310
  xfstests-dev: kcephfs: generic
1311
* https://tracker.ceph.com/issues/59344
1312
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1313
* https://tracker.ceph.com/issues/52438
1314
  qa: ffsb timeout
1315
* https://tracker.ceph.com/issues/57656
1316
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1317
* https://tracker.ceph.com/issues/54460
1318
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1319
* https://tracker.ceph.com/issues/57655
1320
  qa: fs:mixed-clients kernel_untar_build failure
1321
* https://tracker.ceph.com/issues/61182
1322
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1323
* https://tracker.ceph.com/issues/61400
1324
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1325 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1326 146 Patrick Donnelly
  reached max tries: scrub does not complete
1327
1328
h3. 05 July 2023
1329
1330
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1331
1332 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1333 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1334
1335
h3. 27 Jun 2023
1336
1337
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1338 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1339
1340
* https://tracker.ceph.com/issues/59348
1341
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1342
* https://tracker.ceph.com/issues/54460
1343
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1344
* https://tracker.ceph.com/issues/59346
1345
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1346
* https://tracker.ceph.com/issues/59344
1347
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1348
* https://tracker.ceph.com/issues/61399
1349
  libmpich: undefined references to fi_strerror
1350
* https://tracker.ceph.com/issues/50223
1351
  client.xxxx isn't responding to mclientcaps(revoke)
1352 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1353
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1354 142 Venky Shankar
1355
1356
h3. 22 June 2023
1357
1358
* https://tracker.ceph.com/issues/57676
1359
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1360
* https://tracker.ceph.com/issues/54460
1361
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1362
* https://tracker.ceph.com/issues/59344
1363
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1364
* https://tracker.ceph.com/issues/59348
1365
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1366
* https://tracker.ceph.com/issues/61400
1367
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1368
* https://tracker.ceph.com/issues/57655
1369
    qa: fs:mixed-clients kernel_untar_build failure
1370
* https://tracker.ceph.com/issues/61394
1371
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1372
* https://tracker.ceph.com/issues/61762
1373
    qa: wait_for_clean: failed before timeout expired
1374
* https://tracker.ceph.com/issues/61775
1375
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1376
* https://tracker.ceph.com/issues/44565
1377
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1378
* https://tracker.ceph.com/issues/61790
1379
    cephfs client to mds comms remain silent after reconnect
1380
* https://tracker.ceph.com/issues/61791
1381
    snaptest-git-ceph.sh test timed out (job dead)
1382 139 Venky Shankar
1383
1384
h3. 20 June 2023
1385
1386
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1387
1388
* https://tracker.ceph.com/issues/57676
1389
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1390
* https://tracker.ceph.com/issues/54460
1391
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1392 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1393 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1394 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1395 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1396
* https://tracker.ceph.com/issues/59344
1397
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1398
* https://tracker.ceph.com/issues/59348
1399
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1400
* https://tracker.ceph.com/issues/57656
1401
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1402
* https://tracker.ceph.com/issues/61400
1403
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1404
* https://tracker.ceph.com/issues/57655
1405
    qa: fs:mixed-clients kernel_untar_build failure
1406
* https://tracker.ceph.com/issues/44565
1407
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1408
* https://tracker.ceph.com/issues/61737
1409 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1410
1411
h3. 16 June 2023
1412
1413 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1414 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1415 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1416 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1417
1418
1419
* https://tracker.ceph.com/issues/59344
1420
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1421 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1422
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1423 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1424
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1425
* https://tracker.ceph.com/issues/57656
1426
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1427
* https://tracker.ceph.com/issues/54460
1428
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1429 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1430
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1431 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1432
  libmpich: undefined references to fi_strerror
1433
* https://tracker.ceph.com/issues/58945
1434
  xfstests-dev: ceph-fuse: generic 
1435 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1436 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1437
1438
h3. 24 May 2023
1439
1440
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1441
1442
* https://tracker.ceph.com/issues/57676
1443
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1444
* https://tracker.ceph.com/issues/59683
1445
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1446
* https://tracker.ceph.com/issues/61399
1447
    qa: "[Makefile:299: ior] Error 1"
1448
* https://tracker.ceph.com/issues/61265
1449
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1450
* https://tracker.ceph.com/issues/59348
1451
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1452
* https://tracker.ceph.com/issues/59346
1453
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1454
* https://tracker.ceph.com/issues/61400
1455
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1456
* https://tracker.ceph.com/issues/54460
1457
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1458
* https://tracker.ceph.com/issues/51964
1459
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1460
* https://tracker.ceph.com/issues/59344
1461
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1462
* https://tracker.ceph.com/issues/61407
1463
    mds: abort on CInode::verify_dirfrags
1464
* https://tracker.ceph.com/issues/48773
1465
    qa: scrub does not complete
1466
* https://tracker.ceph.com/issues/57655
1467
    qa: fs:mixed-clients kernel_untar_build failure
1468
* https://tracker.ceph.com/issues/61409
1469 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1470
1471
h3. 15 May 2023
1472 130 Venky Shankar
1473 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1474
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1475
1476
* https://tracker.ceph.com/issues/52624
1477
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1478
* https://tracker.ceph.com/issues/54460
1479
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1480
* https://tracker.ceph.com/issues/57676
1481
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1482
* https://tracker.ceph.com/issues/59684 [kclient bug]
1483
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1484
* https://tracker.ceph.com/issues/59348
1485
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1486 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1487
    dbench test results in call trace in dmesg [kclient bug]
1488 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1489 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1490 125 Venky Shankar
1491
 
1492 129 Rishabh Dave
h3. 11 May 2023
1493
1494
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1495
1496
* https://tracker.ceph.com/issues/59684 [kclient bug]
1497
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1498
* https://tracker.ceph.com/issues/59348
1499
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1500
* https://tracker.ceph.com/issues/57655
1501
  qa: fs:mixed-clients kernel_untar_build failure
1502
* https://tracker.ceph.com/issues/57676
1503
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1504
* https://tracker.ceph.com/issues/55805
1505
  error during scrub thrashing reached max tries in 900 secs
1506
* https://tracker.ceph.com/issues/54460
1507
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1508
* https://tracker.ceph.com/issues/57656
1509
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1510
* https://tracker.ceph.com/issues/58220
1511
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1512 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1513
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1514 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1515
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1516 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1517
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1518 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1519
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1520
1521 125 Venky Shankar
h3. 11 May 2023
1522 127 Venky Shankar
1523
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1524 126 Venky Shankar
1525 125 Venky Shankar
(No fsstress job failure [https://tracker.ceph.com/issues/58340], since https://github.com/ceph/ceph/pull/49553
1526
was included in the branch; however, the PR has since been updated and needs a retest.)
1527
1528
* https://tracker.ceph.com/issues/52624
1529
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1530
* https://tracker.ceph.com/issues/54460
1531
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1532
* https://tracker.ceph.com/issues/57676
1533
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1534
* https://tracker.ceph.com/issues/59683
1535
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1536
* https://tracker.ceph.com/issues/59684 [kclient bug]
1537
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1538
* https://tracker.ceph.com/issues/59348
1539 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1540
1541
h3. 09 May 2023
1542
1543
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1544
1545
* https://tracker.ceph.com/issues/52624
1546
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1547
* https://tracker.ceph.com/issues/58340
1548
    mds: fsstress.sh hangs with multimds
1549
* https://tracker.ceph.com/issues/54460
1550
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1551
* https://tracker.ceph.com/issues/57676
1552
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1553
* https://tracker.ceph.com/issues/51964
1554
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1555
* https://tracker.ceph.com/issues/59350
1556
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1557
* https://tracker.ceph.com/issues/59683
1558
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1559
* https://tracker.ceph.com/issues/59684 [kclient bug]
1560
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1561
* https://tracker.ceph.com/issues/59348
1562 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1563
1564
h3. 10 Apr 2023
1565
1566
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1567
1568
* https://tracker.ceph.com/issues/52624
1569
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1570
* https://tracker.ceph.com/issues/58340
1571
    mds: fsstress.sh hangs with multimds
1572
* https://tracker.ceph.com/issues/54460
1573
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1574
* https://tracker.ceph.com/issues/57676
1575
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1576 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1577 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1578 121 Rishabh Dave
1579 120 Rishabh Dave
h3. 31 Mar 2023
1580 122 Rishabh Dave
1581
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1582 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1583
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1584
1585
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).
1586
1587
* https://tracker.ceph.com/issues/57676
1588
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1589
* https://tracker.ceph.com/issues/54460
1590
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1591
* https://tracker.ceph.com/issues/58220
1592
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1593
* https://tracker.ceph.com/issues/58220#note-9
1594
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1595
* https://tracker.ceph.com/issues/56695
1596
  Command failed (workunit test suites/pjd.sh)
1597
* https://tracker.ceph.com/issues/58564 
1598
  workunit dbench failed with error code 1
1599
* https://tracker.ceph.com/issues/57206
1600
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1601
* https://tracker.ceph.com/issues/57580
1602
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1603
* https://tracker.ceph.com/issues/58940
1604
  ceph osd hit ceph_abort
1605
* https://tracker.ceph.com/issues/55805
1606 118 Venky Shankar
  error scrub thrashing reached max tries in 900 secs
1607
1608
h3. 30 March 2023
1609
1610
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1611
1612
* https://tracker.ceph.com/issues/58938
1613
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1614
* https://tracker.ceph.com/issues/51964
1615
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1616
* https://tracker.ceph.com/issues/58340
1617 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1618
1619 115 Venky Shankar
h3. 29 March 2023
1620 114 Venky Shankar
1621
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1622
1623
* https://tracker.ceph.com/issues/56695
1624
    [RHEL stock] pjd test failures
1625
* https://tracker.ceph.com/issues/57676
1626
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1627
* https://tracker.ceph.com/issues/57087
1628
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1629 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1630
    mds: fsstress.sh hangs with multimds
1631 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1632
    qa: fs:mixed-clients kernel_untar_build failure
1633 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1634
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1635 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1636 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1637
1638
h3. 13 Mar 2023
1639
1640
* https://tracker.ceph.com/issues/56695
1641
    [RHEL stock] pjd test failures
1642
* https://tracker.ceph.com/issues/57676
1643
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1644
* https://tracker.ceph.com/issues/51964
1645
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1646
* https://tracker.ceph.com/issues/54460
1647
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1648
* https://tracker.ceph.com/issues/57656
1649 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1650
1651
h3. 09 Mar 2023
1652
1653
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1654
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1655
1656
* https://tracker.ceph.com/issues/56695
1657
    [RHEL stock] pjd test failures
1658
* https://tracker.ceph.com/issues/57676
1659
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1660
* https://tracker.ceph.com/issues/51964
1661
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1662
* https://tracker.ceph.com/issues/54460
1663
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1664
* https://tracker.ceph.com/issues/58340
1665
    mds: fsstress.sh hangs with multimds
1666
* https://tracker.ceph.com/issues/57087
1667 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1668
1669
h3. 07 Mar 2023
1670
1671
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1672
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1673
1674
* https://tracker.ceph.com/issues/56695
1675
    [RHEL stock] pjd test failures
1676
* https://tracker.ceph.com/issues/57676
1677
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1678
* https://tracker.ceph.com/issues/51964
1679
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1680
* https://tracker.ceph.com/issues/57656
1681
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1682
* https://tracker.ceph.com/issues/57655
1683
    qa: fs:mixed-clients kernel_untar_build failure
1684
* https://tracker.ceph.com/issues/58220
1685
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1686
* https://tracker.ceph.com/issues/54460
1687
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1688
* https://tracker.ceph.com/issues/58934
1689 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1690
1691
h3. 28 Feb 2023
1692
1693
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1694
1695
* https://tracker.ceph.com/issues/56695
1696
    [RHEL stock] pjd test failures
1697
* https://tracker.ceph.com/issues/57676
1698
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1699 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1700 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1701
1702 107 Venky Shankar
(teuthology infra issues are causing testing delays - merging PRs whose tests are passing)
1703
1704
h3. 25 Jan 2023
1705
1706
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1707
1708
* https://tracker.ceph.com/issues/52624
1709
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1710
* https://tracker.ceph.com/issues/56695
1711
    [RHEL stock] pjd test failures
1712
* https://tracker.ceph.com/issues/57676
1713
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1714
* https://tracker.ceph.com/issues/56446
1715
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1716
* https://tracker.ceph.com/issues/57206
1717
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1718
* https://tracker.ceph.com/issues/58220
1719
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1720
* https://tracker.ceph.com/issues/58340
1721
  mds: fsstress.sh hangs with multimds
1722
* https://tracker.ceph.com/issues/56011
1723
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1724
* https://tracker.ceph.com/issues/54460
1725 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1726
1727
h3. 30 JAN 2023
1728
1729
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1730
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1731 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1732
1733 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1734
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1735
* https://tracker.ceph.com/issues/56695
1736
  [RHEL stock] pjd test failures
1737
* https://tracker.ceph.com/issues/57676
1738
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1739
* https://tracker.ceph.com/issues/55332
1740
  Failure in snaptest-git-ceph.sh
1741
* https://tracker.ceph.com/issues/51964
1742
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1743
* https://tracker.ceph.com/issues/56446
1744
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1745
* https://tracker.ceph.com/issues/57655 
1746
  qa: fs:mixed-clients kernel_untar_build failure
1747
* https://tracker.ceph.com/issues/54460
1748
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1749 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1750
  mds: fsstress.sh hangs with multimds
1751 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1752 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1753
1754
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1755 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1756
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1757 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1758 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1759
1760
h3. 15 Dec 2022
1761
1762
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1763
1764
* https://tracker.ceph.com/issues/52624
1765
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1766
* https://tracker.ceph.com/issues/56695
1767
    [RHEL stock] pjd test failures
1768
* https://tracker.ceph.com/issues/58219
1769
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1770
* https://tracker.ceph.com/issues/57655
1771
    qa: fs:mixed-clients kernel_untar_build failure
1772
* https://tracker.ceph.com/issues/57676
1773
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1774
* https://tracker.ceph.com/issues/58340
1775 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1776
1777
h3. 08 Dec 2022
1778 99 Venky Shankar
1779 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1780
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1781
1782
(lots of transient git.ceph.com failures)
1783
1784
* https://tracker.ceph.com/issues/52624
1785
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1786
* https://tracker.ceph.com/issues/56695
1787
    [RHEL stock] pjd test failures
1788
* https://tracker.ceph.com/issues/57655
1789
    qa: fs:mixed-clients kernel_untar_build failure
1790
* https://tracker.ceph.com/issues/58219
1791
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1792
* https://tracker.ceph.com/issues/58220
1793
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1794 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1795
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1796 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1797
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1798
* https://tracker.ceph.com/issues/54460
1799
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1800 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1801 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1802
1803
h3. 14 Oct 2022
1804
1805
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1806
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1807
1808
* https://tracker.ceph.com/issues/52624
1809
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1810
* https://tracker.ceph.com/issues/55804
1811
    Command failed (workunit test suites/pjd.sh)
1812
* https://tracker.ceph.com/issues/51964
1813
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1814
* https://tracker.ceph.com/issues/57682
1815
    client: ERROR: test_reconnect_after_blocklisted
1816 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1817 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1818
1819
h3. 10 Oct 2022
1820 92 Rishabh Dave
1821 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1822
1823
Re-runs:
1824
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1825 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1826 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1827 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1828 91 Rishabh Dave
1829
Known bugs:
1830
* https://tracker.ceph.com/issues/52624
1831
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1832
* https://tracker.ceph.com/issues/50223
1833
  client.xxxx isn't responding to mclientcaps(revoke)
1834
* https://tracker.ceph.com/issues/57299
1835
  qa: test_dump_loads fails with JSONDecodeError
1836
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1837
  qa: fs:mixed-clients kernel_untar_build failure
1838
* https://tracker.ceph.com/issues/57206
1839 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1840
1841
h3. 2022 Sep 29
1842
1843
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1844
1845
* https://tracker.ceph.com/issues/55804
1846
  Command failed (workunit test suites/pjd.sh)
1847
* https://tracker.ceph.com/issues/36593
1848
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1849
* https://tracker.ceph.com/issues/52624
1850
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1851
* https://tracker.ceph.com/issues/51964
1852
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1853
* https://tracker.ceph.com/issues/56632
1854
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1855
* https://tracker.ceph.com/issues/50821
1856 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1857
1858
h3. 2022 Sep 26
1859
1860
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1861
1862
* https://tracker.ceph.com/issues/55804
1863
    qa failure: pjd link tests failed
1864
* https://tracker.ceph.com/issues/57676
1865
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1866
* https://tracker.ceph.com/issues/52624
1867
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1868
* https://tracker.ceph.com/issues/57580
1869
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1870
* https://tracker.ceph.com/issues/48773
1871
    qa: scrub does not complete
1872
* https://tracker.ceph.com/issues/57299
1873
    qa: test_dump_loads fails with JSONDecodeError
1874
* https://tracker.ceph.com/issues/57280
1875
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1876
* https://tracker.ceph.com/issues/57205
1877
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1878
* https://tracker.ceph.com/issues/57656
1879
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1880
* https://tracker.ceph.com/issues/57677
1881
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1882
* https://tracker.ceph.com/issues/57206
1883
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1884
* https://tracker.ceph.com/issues/57446
1885
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1886 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1887
    qa: fs:mixed-clients kernel_untar_build failure
1888 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1889
    client: ERROR: test_reconnect_after_blocklisted
1890 87 Patrick Donnelly
1891
1892
h3. 2022 Sep 22
1893
1894
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1895
1896
* https://tracker.ceph.com/issues/57299
1897
    qa: test_dump_loads fails with JSONDecodeError
1898
* https://tracker.ceph.com/issues/57205
1899
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1900
* https://tracker.ceph.com/issues/52624
1901
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1902
* https://tracker.ceph.com/issues/57580
1903
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1904
* https://tracker.ceph.com/issues/57280
1905
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1906
* https://tracker.ceph.com/issues/48773
1907
    qa: scrub does not complete
1908
* https://tracker.ceph.com/issues/56446
1909
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1910
* https://tracker.ceph.com/issues/57206
1911
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1912
* https://tracker.ceph.com/issues/51267
1913
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1914
1915
NEW:
1916
1917
* https://tracker.ceph.com/issues/57656
1918
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1919
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1920
    qa: fs:mixed-clients kernel_untar_build failure
1921
* https://tracker.ceph.com/issues/57657
1922
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1923
1924
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1925 80 Venky Shankar
1926 79 Venky Shankar
1927
h3. 2022 Sep 16
1928
1929
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1930
1931
* https://tracker.ceph.com/issues/57446
1932
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1933
* https://tracker.ceph.com/issues/57299
1934
    qa: test_dump_loads fails with JSONDecodeError
1935
* https://tracker.ceph.com/issues/50223
1936
    client.xxxx isn't responding to mclientcaps(revoke)
1937
* https://tracker.ceph.com/issues/52624
1938
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1939
* https://tracker.ceph.com/issues/57205
1940
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1941
* https://tracker.ceph.com/issues/57280
1942
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1943
* https://tracker.ceph.com/issues/51282
1944
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1945
* https://tracker.ceph.com/issues/48203
1946
    qa: quota failure
1947
* https://tracker.ceph.com/issues/36593
1948
    qa: quota failure caused by clients stepping on each other
1949
* https://tracker.ceph.com/issues/57580
1950 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1951
1952 76 Rishabh Dave
1953
h3. 2022 Aug 26
1954
1955
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1956
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1957
1958
* https://tracker.ceph.com/issues/57206
1959
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1960
* https://tracker.ceph.com/issues/56632
1961
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1962
* https://tracker.ceph.com/issues/56446
1963
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1964
* https://tracker.ceph.com/issues/51964
1965
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1966
* https://tracker.ceph.com/issues/53859
1967
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1968
1969
* https://tracker.ceph.com/issues/54460
1970
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1971
* https://tracker.ceph.com/issues/54462
1972
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1975
* https://tracker.ceph.com/issues/36593
1976
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1977
1978
* https://tracker.ceph.com/issues/52624
1979
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1980
* https://tracker.ceph.com/issues/55804
1981
  Command failed (workunit test suites/pjd.sh)
1982
* https://tracker.ceph.com/issues/50223
1983
  client.xxxx isn't responding to mclientcaps(revoke)
1984 75 Venky Shankar
1985
1986
h3. 2022 Aug 22
1987
1988
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1989
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1990
1991
* https://tracker.ceph.com/issues/52624
1992
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1993
* https://tracker.ceph.com/issues/56446
1994
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1995
* https://tracker.ceph.com/issues/55804
1996
    Command failed (workunit test suites/pjd.sh)
1997
* https://tracker.ceph.com/issues/51278
1998
    mds: "FAILED ceph_assert(!segments.empty())"
1999
* https://tracker.ceph.com/issues/54460
2000
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2001
* https://tracker.ceph.com/issues/57205
2002
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
2003
* https://tracker.ceph.com/issues/57206
2004
    ceph_test_libcephfs_reclaim crashes during test
2005
* https://tracker.ceph.com/issues/53859
2006
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2007
* https://tracker.ceph.com/issues/50223
2008 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
2009
2010
h3. 2022 Aug 12
2011
2012
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
2013
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
2014
2015
* https://tracker.ceph.com/issues/52624
2016
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2017
* https://tracker.ceph.com/issues/56446
2018
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2019
* https://tracker.ceph.com/issues/51964
2020
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2021
* https://tracker.ceph.com/issues/55804
2022
    Command failed (workunit test suites/pjd.sh)
2023
* https://tracker.ceph.com/issues/50223
2024
    client.xxxx isn't responding to mclientcaps(revoke)
2025
* https://tracker.ceph.com/issues/50821
2026 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
2027 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2028 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2029
2030
h3. 2022 Aug 04
2031
2032
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2033
2034 69 Rishabh Dave
Unrelated teuthology failure on RHEL
2035 68 Rishabh Dave
2036
h3. 2022 Jul 25
2037
2038
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2039
2040 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2041
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2042 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2043
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2044
2045
* https://tracker.ceph.com/issues/55804
2046
  Command failed (workunit test suites/pjd.sh)
2047
* https://tracker.ceph.com/issues/50223
2048
  client.xxxx isn't responding to mclientcaps(revoke)
2049
2050
* https://tracker.ceph.com/issues/54460
2051
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2052 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2053 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2054 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2055 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2056
2057
h3. 2022 July 22
2058
2059
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2060
2061
MDS_HEALTH_DUMMY error in log fixed by followup commit.
2062
transient selinux ping failure
2063
2064
* https://tracker.ceph.com/issues/56694
2065
    qa: avoid blocking forever on hung umount
2066
* https://tracker.ceph.com/issues/56695
2067
    [RHEL stock] pjd test failures
2068
* https://tracker.ceph.com/issues/56696
2069
    admin keyring disappears during qa run
2070
* https://tracker.ceph.com/issues/56697
2071
    qa: fs/snaps fails for fuse
2072
* https://tracker.ceph.com/issues/50222
2073
    osd: 5.2s0 deep-scrub : stat mismatch
2074
* https://tracker.ceph.com/issues/56698
2075
    client: FAILED ceph_assert(_size == 0)
2076
* https://tracker.ceph.com/issues/50223
2077
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2078 66 Rishabh Dave
2079 65 Rishabh Dave
2080
h3. 2022 Jul 15
2081
2082
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2083
2084
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2085
2086
* https://tracker.ceph.com/issues/53859
2087
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2088
* https://tracker.ceph.com/issues/55804
2089
  Command failed (workunit test suites/pjd.sh)
2090
* https://tracker.ceph.com/issues/50223
2091
  client.xxxx isn't responding to mclientcaps(revoke)
2092
* https://tracker.ceph.com/issues/50222
2093
  osd: deep-scrub : stat mismatch
2094
2095
* https://tracker.ceph.com/issues/56632
2096
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2097
* https://tracker.ceph.com/issues/56634
2098
  workunit test fs/snaps/snaptest-intodir.sh
2099
* https://tracker.ceph.com/issues/56644
2100
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2101
2102 61 Rishabh Dave
2103
2104
h3. 2022 July 05
2105 62 Rishabh Dave
2106 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2107
2108
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2109
2110
On 2nd re-run only a few jobs failed:
2111 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2113
2114
* https://tracker.ceph.com/issues/56446
2115
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2116
* https://tracker.ceph.com/issues/55804
2117
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2118
2119
* https://tracker.ceph.com/issues/56445
2120 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2121
* https://tracker.ceph.com/issues/51267
2122
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2123 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2124
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2125 61 Rishabh Dave
2126 58 Venky Shankar
2127
2128
h3. 2022 July 04
2129
2130
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2131
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2132
2133
* https://tracker.ceph.com/issues/56445
2134 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2135
* https://tracker.ceph.com/issues/56446
2136
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2137
* https://tracker.ceph.com/issues/51964
2138 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2139 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2140 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2141
2142
h3. 2022 June 20
2143
2144
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2145
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2146
2147
* https://tracker.ceph.com/issues/52624
2148
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2149
* https://tracker.ceph.com/issues/55804
2150
    qa failure: pjd link tests failed
2151
* https://tracker.ceph.com/issues/54108
2152
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2153
* https://tracker.ceph.com/issues/55332
2154 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2155
2156
h3. 2022 June 13
2157
2158
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2159
2160
* https://tracker.ceph.com/issues/56024
2161
    cephadm: removes ceph.conf during qa run causing command failure
2162
* https://tracker.ceph.com/issues/48773
2163
    qa: scrub does not complete
2164
* https://tracker.ceph.com/issues/56012
2165
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2166 55 Venky Shankar
2167 54 Venky Shankar
2168
h3. 2022 Jun 13
2169
2170
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2171
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2172
2173
* https://tracker.ceph.com/issues/52624
2174
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2175
* https://tracker.ceph.com/issues/51964
2176
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2177
* https://tracker.ceph.com/issues/53859
2178
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2179
* https://tracker.ceph.com/issues/55804
2180
    qa failure: pjd link tests failed
2181
* https://tracker.ceph.com/issues/56003
2182
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2183
* https://tracker.ceph.com/issues/56011
2184
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2185
* https://tracker.ceph.com/issues/56012
2186 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2187
2188
h3. 2022 Jun 07
2189
2190
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2191
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2192
2193
* https://tracker.ceph.com/issues/52624
2194
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2195
* https://tracker.ceph.com/issues/50223
2196
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2197
* https://tracker.ceph.com/issues/50224
2198 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2199
2200
h3. 2022 May 12
2201 52 Venky Shankar
2202 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2203
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2204
2205
* https://tracker.ceph.com/issues/52624
2206
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2207
* https://tracker.ceph.com/issues/50223
2208
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2209
* https://tracker.ceph.com/issues/55332
2210
    Failure in snaptest-git-ceph.sh
2211
* https://tracker.ceph.com/issues/53859
2212 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2213 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2214
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2215 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2216 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2217
2218 50 Venky Shankar
h3. 2022 May 04
2219
2220
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2221 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2222
2223
* https://tracker.ceph.com/issues/52624
2224
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2225
* https://tracker.ceph.com/issues/50223
2226
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2227
* https://tracker.ceph.com/issues/55332
2228
    Failure in snaptest-git-ceph.sh
2229
* https://tracker.ceph.com/issues/53859
2230
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2231
* https://tracker.ceph.com/issues/55516
2232
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2233
* https://tracker.ceph.com/issues/55537
2234
    mds: crash during fs:upgrade test
2235
* https://tracker.ceph.com/issues/55538
2236 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2237
2238
h3. 2022 Apr 25
2239
2240
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2241
2242
* https://tracker.ceph.com/issues/52624
2243
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2244
* https://tracker.ceph.com/issues/50223
2245
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2246
* https://tracker.ceph.com/issues/55258
2247
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2248
* https://tracker.ceph.com/issues/55377
2249 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2250
2251
h3. 2022 Apr 14
2252
2253
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2254
2255
* https://tracker.ceph.com/issues/52624
2256
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2257
* https://tracker.ceph.com/issues/50223
2258
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2259
* https://tracker.ceph.com/issues/52438
2260
    qa: ffsb timeout
2261
* https://tracker.ceph.com/issues/55170
2262
    mds: crash during rejoin (CDir::fetch_keys)
2263
* https://tracker.ceph.com/issues/55331
2264
    pjd failure
2265
* https://tracker.ceph.com/issues/48773
2266
    qa: scrub does not complete
2267
* https://tracker.ceph.com/issues/55332
2268
    Failure in snaptest-git-ceph.sh
2269
* https://tracker.ceph.com/issues/55258
2270 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2271
2272 46 Venky Shankar
h3. 2022 Apr 11
2273 45 Venky Shankar
2274
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2275
2276
* https://tracker.ceph.com/issues/48773
2277
    qa: scrub does not complete
2278
* https://tracker.ceph.com/issues/52624
2279
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2280
* https://tracker.ceph.com/issues/52438
2281
    qa: ffsb timeout
2282
* https://tracker.ceph.com/issues/48680
2283
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2284
* https://tracker.ceph.com/issues/55236
2285
    qa: fs/snaps tests fails with "hit max job timeout"
2286
* https://tracker.ceph.com/issues/54108
2287
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2288
* https://tracker.ceph.com/issues/54971
2289
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2290
* https://tracker.ceph.com/issues/50223
2291
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2292
* https://tracker.ceph.com/issues/55258
2293 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2294 42 Venky Shankar
2295 43 Venky Shankar
h3. 2022 Mar 21
2296
2297
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2298
2299
The run didn't go well, with lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
2300
2301
2302 42 Venky Shankar
h3. 2022 Mar 08
2303
2304
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2305
2306
rerun with
2307
- (drop) https://github.com/ceph/ceph/pull/44679
2308
- (drop) https://github.com/ceph/ceph/pull/44958
2309
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2310
2311
* https://tracker.ceph.com/issues/54419 (new)
2312
    `ceph orch upgrade start` seems to never reach completion
2313
* https://tracker.ceph.com/issues/51964
2314
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2315
* https://tracker.ceph.com/issues/52624
2316
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2317
* https://tracker.ceph.com/issues/50223
2318
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2319
* https://tracker.ceph.com/issues/52438
2320
    qa: ffsb timeout
2321
* https://tracker.ceph.com/issues/50821
2322
    qa: untar_snap_rm failure during mds thrashing
2323 41 Venky Shankar
2324
2325
h3. 2022 Feb 09
2326
2327
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2328
2329
rerun with
2330
- (drop) https://github.com/ceph/ceph/pull/37938
2331
- (drop) https://github.com/ceph/ceph/pull/44335
2332
- (drop) https://github.com/ceph/ceph/pull/44491
2333
- (drop) https://github.com/ceph/ceph/pull/44501
2334
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2335
2336
* https://tracker.ceph.com/issues/51964
2337
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2338
* https://tracker.ceph.com/issues/54066
2339
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2340
* https://tracker.ceph.com/issues/48773
2341
    qa: scrub does not complete
2342
* https://tracker.ceph.com/issues/52624
2343
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2344
* https://tracker.ceph.com/issues/50223
2345
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2346
* https://tracker.ceph.com/issues/52438
2347 40 Patrick Donnelly
    qa: ffsb timeout
2348
2349
h3. 2022 Feb 01
2350
2351
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2352
2353
* https://tracker.ceph.com/issues/54107
2354
    kclient: hang during umount
2355
* https://tracker.ceph.com/issues/54106
2356
    kclient: hang during workunit cleanup
2357
* https://tracker.ceph.com/issues/54108
2358
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2359
* https://tracker.ceph.com/issues/48773
2360
    qa: scrub does not complete
2361
* https://tracker.ceph.com/issues/52438
2362
    qa: ffsb timeout
2363 36 Venky Shankar
2364
2365
h3. 2022 Jan 13
2366 39 Venky Shankar
2367 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2368 38 Venky Shankar
2369
rerun with:
2370 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2371
- (drop) https://github.com/ceph/ceph/pull/43184
2372
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2373
2374
* https://tracker.ceph.com/issues/50223
2375
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2376
* https://tracker.ceph.com/issues/51282
2377
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2378
* https://tracker.ceph.com/issues/48773
2379
    qa: scrub does not complete
2380
* https://tracker.ceph.com/issues/52624
2381
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2382
* https://tracker.ceph.com/issues/53859
2383 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2384
2385
h3. 2022 Jan 03
2386
2387
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2388
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2389
2390
* https://tracker.ceph.com/issues/50223
2391
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2392
* https://tracker.ceph.com/issues/51964
2393
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2394
* https://tracker.ceph.com/issues/51267
2395
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2396
* https://tracker.ceph.com/issues/51282
2397
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2398
* https://tracker.ceph.com/issues/50821
2399
    qa: untar_snap_rm failure during mds thrashing
2400 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2401
    mds: "FAILED ceph_assert(!segments.empty())"
2402
* https://tracker.ceph.com/issues/52279
2403 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2404 33 Patrick Donnelly
2405
2406
h3. 2021 Dec 22
2407
2408
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2409
2410
* https://tracker.ceph.com/issues/52624
2411
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2412
* https://tracker.ceph.com/issues/50223
2413
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2414
* https://tracker.ceph.com/issues/52279
2415
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2416
* https://tracker.ceph.com/issues/50224
2417
    qa: test_mirroring_init_failure_with_recovery failure
2418
* https://tracker.ceph.com/issues/48773
2419
    qa: scrub does not complete
2420 32 Venky Shankar
2421
2422
h3. 2021 Nov 30
2423
2424
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2425
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2426
2427
* https://tracker.ceph.com/issues/53436
2428
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2429
* https://tracker.ceph.com/issues/51964
2430
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2431
* https://tracker.ceph.com/issues/48812
2432
    qa: test_scrub_pause_and_resume_with_abort failure
2433
* https://tracker.ceph.com/issues/51076
2434
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2435
* https://tracker.ceph.com/issues/50223
2436
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2437
* https://tracker.ceph.com/issues/52624
2438
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2439
* https://tracker.ceph.com/issues/50250
2440
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2441 31 Patrick Donnelly
2442
2443
h3. 2021 November 9
2444
2445
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2446
2447
* https://tracker.ceph.com/issues/53214
2448
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2449
* https://tracker.ceph.com/issues/48773
2450
    qa: scrub does not complete
2451
* https://tracker.ceph.com/issues/50223
2452
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2453
* https://tracker.ceph.com/issues/51282
2454
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2455
* https://tracker.ceph.com/issues/52624
2456
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2457
* https://tracker.ceph.com/issues/53216
2458
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2459
* https://tracker.ceph.com/issues/50250
2460
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2461
2462 30 Patrick Donnelly
2463
2464
h3. 2021 November 03
2465
2466
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2467
2468
* https://tracker.ceph.com/issues/51964
2469
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2470
* https://tracker.ceph.com/issues/51282
2471
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2472
* https://tracker.ceph.com/issues/52436
2473
    fs/ceph: "corrupt mdsmap"
2474
* https://tracker.ceph.com/issues/53074
2475
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2476
* https://tracker.ceph.com/issues/53150
2477
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2478
* https://tracker.ceph.com/issues/53155
2479
    MDSMonitor: assertion during upgrade to v16.2.5+
2480 29 Patrick Donnelly
2481
2482
h3. 2021 October 26
2483
2484
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2485
2486
* https://tracker.ceph.com/issues/53074
2487
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2488
* https://tracker.ceph.com/issues/52997
2489
    testing: hanging umount
2490
* https://tracker.ceph.com/issues/50824
2491
    qa: snaptest-git-ceph bus error
2492
* https://tracker.ceph.com/issues/52436
2493
    fs/ceph: "corrupt mdsmap"
2494
* https://tracker.ceph.com/issues/48773
2495
    qa: scrub does not complete
2496
* https://tracker.ceph.com/issues/53082
2497
    ceph-fuse: segmentation fault in Client::handle_mds_map
2498
* https://tracker.ceph.com/issues/50223
2499
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2500
* https://tracker.ceph.com/issues/52624
2501
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2502
* https://tracker.ceph.com/issues/50224
2503
    qa: test_mirroring_init_failure_with_recovery failure
2504
* https://tracker.ceph.com/issues/50821
2505
    qa: untar_snap_rm failure during mds thrashing
2506
* https://tracker.ceph.com/issues/50250
2507
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2508
2509 27 Patrick Donnelly
2510
2511 28 Patrick Donnelly
h3. 2021 October 19
2512 27 Patrick Donnelly
2513
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2514
2515
* https://tracker.ceph.com/issues/52995
2516
    qa: test_standby_count_wanted failure
2517
* https://tracker.ceph.com/issues/52948
2518
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2519
* https://tracker.ceph.com/issues/52996
2520
    qa: test_perf_counters via test_openfiletable
2521
* https://tracker.ceph.com/issues/48772
2522
    qa: pjd: not ok 9, 44, 80
2523
* https://tracker.ceph.com/issues/52997
2524
    testing: hanging umount
2525
* https://tracker.ceph.com/issues/50250
2526
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2527
* https://tracker.ceph.com/issues/52624
2528
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2529
* https://tracker.ceph.com/issues/50223
2530
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2531
* https://tracker.ceph.com/issues/50821
2532
    qa: untar_snap_rm failure during mds thrashing
2533
* https://tracker.ceph.com/issues/48773
2534
    qa: scrub does not complete
2535 26 Patrick Donnelly
2536
2537
h3. 2021 October 12
2538
2539
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2540
2541
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2542
2543
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2544
2545
2546
* https://tracker.ceph.com/issues/51282
2547
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2548
* https://tracker.ceph.com/issues/52948
2549
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2550
* https://tracker.ceph.com/issues/48773
2551
    qa: scrub does not complete
2552
* https://tracker.ceph.com/issues/50224
2553
    qa: test_mirroring_init_failure_with_recovery failure
2554
* https://tracker.ceph.com/issues/52949
2555
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2556 25 Patrick Donnelly
2557 23 Patrick Donnelly
2558 24 Patrick Donnelly
h3. 2021 October 02
2559
2560
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2561
2562
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2563
2564
test_simple failures caused by PR in this set.
2565
2566
A few reruns because of QA infra noise.
2567
2568
* https://tracker.ceph.com/issues/52822
2569
    qa: failed pacific install on fs:upgrade
2570
* https://tracker.ceph.com/issues/52624
2571
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2572
* https://tracker.ceph.com/issues/50223
2573
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2574
* https://tracker.ceph.com/issues/48773
2575
    qa: scrub does not complete
2576
2577
2578 23 Patrick Donnelly
h3. 2021 September 20
2579
2580
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2581
2582
* https://tracker.ceph.com/issues/52677
2583
    qa: test_simple failure
2584
* https://tracker.ceph.com/issues/51279
2585
    kclient hangs on umount (testing branch)
2586
* https://tracker.ceph.com/issues/50223
2587
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2588
* https://tracker.ceph.com/issues/50250
2589
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2590
* https://tracker.ceph.com/issues/52624
2591
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2592
* https://tracker.ceph.com/issues/52438
2593
    qa: ffsb timeout
2594 22 Patrick Donnelly
2595
2596
h3. 2021 September 10
2597
2598
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2599
2600
* https://tracker.ceph.com/issues/50223
2601
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2602
* https://tracker.ceph.com/issues/50250
2603
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2604
* https://tracker.ceph.com/issues/52624
2605
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2606
* https://tracker.ceph.com/issues/52625
2607
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2608
* https://tracker.ceph.com/issues/52439
2609
    qa: acls does not compile on centos stream
2610
* https://tracker.ceph.com/issues/50821
2611
    qa: untar_snap_rm failure during mds thrashing
2612
* https://tracker.ceph.com/issues/48773
2613
    qa: scrub does not complete
2614
* https://tracker.ceph.com/issues/52626
2615
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2616
* https://tracker.ceph.com/issues/51279
2617
    kclient hangs on umount (testing branch)
2618 21 Patrick Donnelly
2619
2620
h3. 2021 August 27
2621
2622
Several jobs died because of device failures.
2623
2624
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2625
2626
* https://tracker.ceph.com/issues/52430
2627
    mds: fast async create client mount breaks racy test
2628
* https://tracker.ceph.com/issues/52436
2629
    fs/ceph: "corrupt mdsmap"
2630
* https://tracker.ceph.com/issues/52437
2631
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2632
* https://tracker.ceph.com/issues/51282
2633
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2634
* https://tracker.ceph.com/issues/52438
2635
    qa: ffsb timeout
2636
* https://tracker.ceph.com/issues/52439
2637
    qa: acls does not compile on centos stream
2638 20 Patrick Donnelly
2639
2640
h3. 2021 July 30
2641
2642
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2643
2644
* https://tracker.ceph.com/issues/50250
2645
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2646
* https://tracker.ceph.com/issues/51282
2647
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2648
* https://tracker.ceph.com/issues/48773
2649
    qa: scrub does not complete
2650
* https://tracker.ceph.com/issues/51975
2651
    pybind/mgr/stats: KeyError
2652 19 Patrick Donnelly
2653
2654
h3. 2021 July 28
2655
2656
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2657
2658
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2659
2660
* https://tracker.ceph.com/issues/51905
2661
    qa: "error reading sessionmap 'mds1_sessionmap'"
2662
* https://tracker.ceph.com/issues/48773
2663
    qa: scrub does not complete
2664
* https://tracker.ceph.com/issues/50250
2665
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2666
* https://tracker.ceph.com/issues/51267
2667
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2668
* https://tracker.ceph.com/issues/51279
2669
    kclient hangs on umount (testing branch)
2670 18 Patrick Donnelly
2671
2672
h3. 2021 July 16
2673
2674
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2675
2676
* https://tracker.ceph.com/issues/48773
2677
    qa: scrub does not complete
2678
* https://tracker.ceph.com/issues/48772
2679
    qa: pjd: not ok 9, 44, 80
2680
* https://tracker.ceph.com/issues/45434
2681
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2682
* https://tracker.ceph.com/issues/51279
2683
    kclient hangs on umount (testing branch)
2684
* https://tracker.ceph.com/issues/50824
2685
    qa: snaptest-git-ceph bus error
2686 17 Patrick Donnelly
2687
2688
h3. 2021 July 04
2689
2690
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2691
2692
* https://tracker.ceph.com/issues/48773
2693
    qa: scrub does not complete
2694
* https://tracker.ceph.com/issues/39150
2695
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2696
* https://tracker.ceph.com/issues/45434
2697
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2698
* https://tracker.ceph.com/issues/51282
2699
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2700
* https://tracker.ceph.com/issues/48771
2701
    qa: iogen: workload fails to cause balancing
2702
* https://tracker.ceph.com/issues/51279
2703
    kclient hangs on umount (testing branch)
2704
* https://tracker.ceph.com/issues/50250
2705
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2706 16 Patrick Donnelly
2707
2708
h3. 2021 July 01
2709
2710
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2711
2712
* https://tracker.ceph.com/issues/51197
2713
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2714
* https://tracker.ceph.com/issues/50866
2715
    osd: stat mismatch on objects
2716
* https://tracker.ceph.com/issues/48773
2717
    qa: scrub does not complete
2718 15 Patrick Donnelly
2719
2720
h3. 2021 June 26
2721
2722
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2723
2724
* https://tracker.ceph.com/issues/51183
2725
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2726
* https://tracker.ceph.com/issues/51410
2727
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2728
* https://tracker.ceph.com/issues/48773
2729
    qa: scrub does not complete
2730
* https://tracker.ceph.com/issues/51282
2731
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2732
* https://tracker.ceph.com/issues/51169
2733
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2734
* https://tracker.ceph.com/issues/48772
2735
    qa: pjd: not ok 9, 44, 80
2736 14 Patrick Donnelly
2737
2738
h3. 2021 June 21
2739
2740
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2741
2742
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2743
2744
* https://tracker.ceph.com/issues/51282
2745
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2746
* https://tracker.ceph.com/issues/51183
2747
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2748
* https://tracker.ceph.com/issues/48773
2749
    qa: scrub does not complete
2750
* https://tracker.ceph.com/issues/48771
2751
    qa: iogen: workload fails to cause balancing
2752
* https://tracker.ceph.com/issues/51169
2753
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2754
* https://tracker.ceph.com/issues/50495
2755
    libcephfs: shutdown race fails with status 141
2756
* https://tracker.ceph.com/issues/45434
2757
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2758
* https://tracker.ceph.com/issues/50824
2759
    qa: snaptest-git-ceph bus error
2760
* https://tracker.ceph.com/issues/50223
2761
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2762 13 Patrick Donnelly
2763
2764
h3. 2021 June 16
2765
2766
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2767
2768
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2769
2770
* https://tracker.ceph.com/issues/45434
2771
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2772
* https://tracker.ceph.com/issues/51169
2773
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2774
* https://tracker.ceph.com/issues/43216
2775
    MDSMonitor: removes MDS coming out of quorum election
2776
* https://tracker.ceph.com/issues/51278
2777
    mds: "FAILED ceph_assert(!segments.empty())"
2778
* https://tracker.ceph.com/issues/51279
2779
    kclient hangs on umount (testing branch)
2780
* https://tracker.ceph.com/issues/51280
2781
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2782
* https://tracker.ceph.com/issues/51183
2783
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2784
* https://tracker.ceph.com/issues/51281
2785
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2786
* https://tracker.ceph.com/issues/48773
2787
    qa: scrub does not complete
2788
* https://tracker.ceph.com/issues/51076
2789
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2790
* https://tracker.ceph.com/issues/51228
2791
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2792
* https://tracker.ceph.com/issues/51282
2793
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2794 12 Patrick Donnelly
2795
2796
h3. 2021 June 14
2797
2798
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2799
2800
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2801
2802
* https://tracker.ceph.com/issues/51169
2803
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2804
* https://tracker.ceph.com/issues/51228
2805
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2806
* https://tracker.ceph.com/issues/48773
2807
    qa: scrub does not complete
2808
* https://tracker.ceph.com/issues/51183
2809
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2810
* https://tracker.ceph.com/issues/45434
2811
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2812
* https://tracker.ceph.com/issues/51182
2813
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2814
* https://tracker.ceph.com/issues/51229
2815
    qa: test_multi_snap_schedule list difference failure
2816
* https://tracker.ceph.com/issues/50821
2817
    qa: untar_snap_rm failure during mds thrashing
2818 11 Patrick Donnelly
2819
2820
h3. 2021 June 13
2821
2822
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2823
2824
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2825
2826
* https://tracker.ceph.com/issues/51169
2827
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2828
* https://tracker.ceph.com/issues/48773
2829
    qa: scrub does not complete
2830
* https://tracker.ceph.com/issues/51182
2831
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2832
* https://tracker.ceph.com/issues/51183
2833
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2834
* https://tracker.ceph.com/issues/51197
2835
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2836
* https://tracker.ceph.com/issues/45434
2837 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2838
2839
h3. 2021 June 11
2840
2841
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2842
2843
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2844
2845
* https://tracker.ceph.com/issues/51169
2846
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2847
* https://tracker.ceph.com/issues/45434
2848
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2849
* https://tracker.ceph.com/issues/48771
2850
    qa: iogen: workload fails to cause balancing
2851
* https://tracker.ceph.com/issues/43216
2852
    MDSMonitor: removes MDS coming out of quorum election
2853
* https://tracker.ceph.com/issues/51182
2854
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2855
* https://tracker.ceph.com/issues/50223
2856
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2857
* https://tracker.ceph.com/issues/48773
2858
    qa: scrub does not complete
2859
* https://tracker.ceph.com/issues/51183
2860
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2861
* https://tracker.ceph.com/issues/51184
2862
    qa: fs:bugs does not specify distro
2863 9 Patrick Donnelly
2864
2865
h3. 2021 June 03
2866
2867
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2868
2869
* https://tracker.ceph.com/issues/45434
2870
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2871
* https://tracker.ceph.com/issues/50016
2872
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2873
* https://tracker.ceph.com/issues/50821
2874
    qa: untar_snap_rm failure during mds thrashing
2875
* https://tracker.ceph.com/issues/50622 (regression)
2876
    msg: active_connections regression
2877
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2878
    qa: failed umount in test_volumes
2879
* https://tracker.ceph.com/issues/48773
2880
    qa: scrub does not complete
2881
* https://tracker.ceph.com/issues/43216
2882
    MDSMonitor: removes MDS coming out of quorum election
2883 7 Patrick Donnelly
2884
2885 8 Patrick Donnelly
h3. 2021 May 18
2886
2887
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2888
2889
Regression in testing kernel caused some failures. Ilya fixed those and rerun
2890
looked better. Some odd new noise in the rerun relating to packaging and "No
2891
module named 'tasks.ceph'".
2892
2893
* https://tracker.ceph.com/issues/50824
2894
    qa: snaptest-git-ceph bus error
2895
* https://tracker.ceph.com/issues/50622 (regression)
2896
    msg: active_connections regression
2897
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2898
    qa: failed umount in test_volumes
2899
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2900
    qa: quota failure
2901
2902
2903 7 Patrick Donnelly
h3. 2021 May 18
2904
2905
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2906
2907
* https://tracker.ceph.com/issues/50821
2908
    qa: untar_snap_rm failure during mds thrashing
2909
* https://tracker.ceph.com/issues/48773
2910
    qa: scrub does not complete
2911
* https://tracker.ceph.com/issues/45591
2912
    mgr: FAILED ceph_assert(daemon != nullptr)
2913
* https://tracker.ceph.com/issues/50866
2914
    osd: stat mismatch on objects
2915
* https://tracker.ceph.com/issues/50016
2916
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2917
* https://tracker.ceph.com/issues/50867
2918
    qa: fs:mirror: reduced data availability
2921
* https://tracker.ceph.com/issues/50622 (regression)
2922
    msg: active_connections regression
2923
* https://tracker.ceph.com/issues/50223
2924
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2925
* https://tracker.ceph.com/issues/50868
2926
    qa: "kern.log.gz already exists; not overwritten"
2927
* https://tracker.ceph.com/issues/50870
2928
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2929 6 Patrick Donnelly
2930
2931
h3. 2021 May 11
2932
2933
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2934
2935
* one class of failures caused by PR
2936
* https://tracker.ceph.com/issues/48812
2937
    qa: test_scrub_pause_and_resume_with_abort failure
2938
* https://tracker.ceph.com/issues/50390
2939
    mds: monclient: wait_auth_rotating timed out after 30
2940
* https://tracker.ceph.com/issues/48773
2941
    qa: scrub does not complete
2942
* https://tracker.ceph.com/issues/50821
2943
    qa: untar_snap_rm failure during mds thrashing
2944
* https://tracker.ceph.com/issues/50224
2945
    qa: test_mirroring_init_failure_with_recovery failure
2946
* https://tracker.ceph.com/issues/50622 (regression)
2947
    msg: active_connections regression
2948
* https://tracker.ceph.com/issues/50825
2949
    qa: snaptest-git-ceph hang during mon thrashing v2
2950
* https://tracker.ceph.com/issues/50821
2951
    qa: untar_snap_rm failure during mds thrashing
2952
* https://tracker.ceph.com/issues/50823
2953
    qa: RuntimeError: timeout waiting for cluster to stabilize
2954 5 Patrick Donnelly
2955
2956
h3. 2021 May 14
2957
2958
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2959
2960
* https://tracker.ceph.com/issues/48812
2961
    qa: test_scrub_pause_and_resume_with_abort failure
2962
* https://tracker.ceph.com/issues/50821
2963
    qa: untar_snap_rm failure during mds thrashing
2964
* https://tracker.ceph.com/issues/50622 (regression)
2965
    msg: active_connections regression
2966
* https://tracker.ceph.com/issues/50822
2967
    qa: testing kernel patch for client metrics causes mds abort
2968
* https://tracker.ceph.com/issues/48773
2969
    qa: scrub does not complete
2970
* https://tracker.ceph.com/issues/50823
2971
    qa: RuntimeError: timeout waiting for cluster to stabilize
2972
* https://tracker.ceph.com/issues/50824
2973
    qa: snaptest-git-ceph bus error
2974
* https://tracker.ceph.com/issues/50825
2975
    qa: snaptest-git-ceph hang during mon thrashing v2
2976
* https://tracker.ceph.com/issues/50826
2977
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2978 4 Patrick Donnelly
2979
2980
h3. 2021 May 01
2981
2982
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2983
2984
* https://tracker.ceph.com/issues/45434
2985
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2986
* https://tracker.ceph.com/issues/50281
2987
    qa: untar_snap_rm timeout
2988
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2989
    qa: quota failure
2990
* https://tracker.ceph.com/issues/48773
2991
    qa: scrub does not complete
2992
* https://tracker.ceph.com/issues/50390
2993
    mds: monclient: wait_auth_rotating timed out after 30
2994
* https://tracker.ceph.com/issues/50250
2995
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2996
* https://tracker.ceph.com/issues/50622 (regression)
2997
    msg: active_connections regression
2998
* https://tracker.ceph.com/issues/45591
2999
    mgr: FAILED ceph_assert(daemon != nullptr)
3000
* https://tracker.ceph.com/issues/50221
3001
    qa: snaptest-git-ceph failure in git diff
3002
* https://tracker.ceph.com/issues/50016
3003
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3004 3 Patrick Donnelly
3005
3006
h3. 2021 Apr 15
3007
3008
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
3009
3010
* https://tracker.ceph.com/issues/50281
3011
    qa: untar_snap_rm timeout
3012
* https://tracker.ceph.com/issues/50220
3013
    qa: dbench workload timeout
3014
* https://tracker.ceph.com/issues/50246
3015
    mds: failure replaying journal (EMetaBlob)
3016
* https://tracker.ceph.com/issues/50250
3017
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3018
* https://tracker.ceph.com/issues/50016
3019
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3020
* https://tracker.ceph.com/issues/50222
3021
    osd: 5.2s0 deep-scrub : stat mismatch
3022
* https://tracker.ceph.com/issues/45434
3023
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3024
* https://tracker.ceph.com/issues/49845
3025
    qa: failed umount in test_volumes
3026
* https://tracker.ceph.com/issues/37808
3027
    osd: osdmap cache weak_refs assert during shutdown
3028
* https://tracker.ceph.com/issues/50387
3029
    client: fs/snaps failure
3030
* https://tracker.ceph.com/issues/50389
3031
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
3032
* https://tracker.ceph.com/issues/50216
3033
    qa: "ls: cannot access 'lost+found': No such file or directory"
3034
* https://tracker.ceph.com/issues/50390
3035
    mds: monclient: wait_auth_rotating timed out after 30
3036
3037 1 Patrick Donnelly
3038
3039 2 Patrick Donnelly
h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01
3101
3102
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
3103
3104
* https://tracker.ceph.com/issues/48772
3105
    qa: pjd: not ok 9, 44, 80
3106
* https://tracker.ceph.com/issues/50177
3107
    osd: "stalled aio... buggy kernel or bad device?"
3108
* https://tracker.ceph.com/issues/48771
3109
    qa: iogen: workload fails to cause balancing
3110
* https://tracker.ceph.com/issues/49845
3111
    qa: failed umount in test_volumes
3112
* https://tracker.ceph.com/issues/48773
3113
    qa: scrub does not complete
3114
* https://tracker.ceph.com/issues/48805
3115
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3116
* https://tracker.ceph.com/issues/50178
3117
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3118
* https://tracker.ceph.com/issues/45434
3119
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3120
3121
h3. 2021 Mar 24
3122
3123
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3124
3125
* https://tracker.ceph.com/issues/49500
3126
    qa: "Assertion `cb_done' failed."
3127
* https://tracker.ceph.com/issues/50019
3128
    qa: mount failure with cephadm "probably no MDS server is up?"
3129
* https://tracker.ceph.com/issues/50020
3130
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3131
* https://tracker.ceph.com/issues/48773
3132
    qa: scrub does not complete
3133
* https://tracker.ceph.com/issues/45434
3134
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3135
* https://tracker.ceph.com/issues/48805
3136
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3137
* https://tracker.ceph.com/issues/48772
3138
    qa: pjd: not ok 9, 44, 80
3139
* https://tracker.ceph.com/issues/50021
3140
    qa: snaptest-git-ceph failure during mon thrashing
3141
* https://tracker.ceph.com/issues/48771
3142
    qa: iogen: workload fails to cause balancing
3143
* https://tracker.ceph.com/issues/50016
3144
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3145
* https://tracker.ceph.com/issues/49466
3146
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3147
3148
3149
h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15
3171
3172
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3173
3174
* https://tracker.ceph.com/issues/49842
3175
    qa: stuck pkg install
3176
* https://tracker.ceph.com/issues/49466
3177
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3178
* https://tracker.ceph.com/issues/49822
3179
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3180
* https://tracker.ceph.com/issues/49240
3181
    terminate called after throwing an instance of 'std::bad_alloc'
3182
* https://tracker.ceph.com/issues/48773
3183
    qa: scrub does not complete
3184
* https://tracker.ceph.com/issues/45434
3185
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3186
* https://tracker.ceph.com/issues/49500
3187
    qa: "Assertion `cb_done' failed."
3188
* https://tracker.ceph.com/issues/49843
3189
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3190
* https://tracker.ceph.com/issues/49845
3191
    qa: failed umount in test_volumes
3192
* https://tracker.ceph.com/issues/48805
3193
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3194
* https://tracker.ceph.com/issues/49605
3195
    mgr: drops command on the floor
3196
There was also a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09
3201
3202
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3203
3204
* https://tracker.ceph.com/issues/49500
3205
    qa: "Assertion `cb_done' failed."
3206
* https://tracker.ceph.com/issues/48805
3207
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3208
* https://tracker.ceph.com/issues/48773
3209
    qa: scrub does not complete
3210
* https://tracker.ceph.com/issues/45434
3211
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3212
* https://tracker.ceph.com/issues/49240
3213
    terminate called after throwing an instance of 'std::bad_alloc'
3214
* https://tracker.ceph.com/issues/49466
3215
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3216
* https://tracker.ceph.com/issues/49684
3217
    qa: fs:cephadm mount does not wait for mds to be created
3218
* https://tracker.ceph.com/issues/48771
3219
    qa: iogen: workload fails to cause balancing