h1. <code>main</code> branch

h3. 4 Apr 2024 TEMP

https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/

* https://tracker.ceph.com/issues/64927
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
* https://tracker.ceph.com/issues/65022
  qa: test_max_items_per_obj open procs not fully cleaned up
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/65136
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
* https://tracker.ceph.com/issues/65246
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)

* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has failures with fuse client
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/48562
  qa: scrub - object missing on disk; some files may be lost
* https://tracker.ceph.com/issues/65020
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
  client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/65018
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54741
  crash: MDSTableClient::got_journaled_ack(unsigned long)

h3. 2024-04-02

https://tracker.ceph.com/issues/65215

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log":https://tracker.ceph.com/issues/65021
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to
  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denial-related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is #64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.

h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* from the last run, job #7507400 failed due to MGR; the FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(Ignore the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

1029 165 Venky Shankar
1030
1031
h3. 14 Aug 2023
1032
1033
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1034
1035
* https://tracker.ceph.com/issues/51964
1036
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1037
* https://tracker.ceph.com/issues/61400
1038
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1039
* https://tracker.ceph.com/issues/61399
1040
    ior build failure
1041
* https://tracker.ceph.com/issues/59348
1042
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1043
* https://tracker.ceph.com/issues/59531
1044
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1045
* https://tracker.ceph.com/issues/59344
1046
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1047
* https://tracker.ceph.com/issues/59346
1048
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1049
* https://tracker.ceph.com/issues/61399
1050
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1051
* https://tracker.ceph.com/issues/59684 [kclient bug]
1052
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1053
* https://tracker.ceph.com/issues/61243 (NEW)
1054
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1055
* https://tracker.ceph.com/issues/57655
1056
    qa: fs:mixed-clients kernel_untar_build failure
1057
* https://tracker.ceph.com/issues/57656
1058
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1059 163 Venky Shankar
1060
1061
h3. 28 JULY 2023
1062
1063
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1064
1065
* https://tracker.ceph.com/issues/51964
1066
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1067
* https://tracker.ceph.com/issues/61400
1068
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1069
* https://tracker.ceph.com/issues/61399
1070
    ior build failure
1071
* https://tracker.ceph.com/issues/57676
1072
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1073
* https://tracker.ceph.com/issues/59348
1074
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1075
* https://tracker.ceph.com/issues/59531
1076
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1077
* https://tracker.ceph.com/issues/59344
1078
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1079
* https://tracker.ceph.com/issues/59346
1080
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1081
* https://github.com/ceph/ceph/pull/52556
1082
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1083
* https://tracker.ceph.com/issues/62187
1084
    iozone: command not found
1085
* https://tracker.ceph.com/issues/61399
1086
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1087
* https://tracker.ceph.com/issues/62188
1088 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1089 158 Rishabh Dave
1090
h3. 24 Jul 2023
1091
1092
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1093
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1094
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1095
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1096
One more extra run to check if blogbench.sh fails every time:
1097
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1098
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing -
1099 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1100
1101
* https://tracker.ceph.com/issues/61892
1102
  test_snapshot_remove (test_strays.TestStrays) failed
1103
* https://tracker.ceph.com/issues/53859
1104
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1105
* https://tracker.ceph.com/issues/61982
1106
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1107
* https://tracker.ceph.com/issues/52438
1108
  qa: ffsb timeout
1109
* https://tracker.ceph.com/issues/54460
1110
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1111
* https://tracker.ceph.com/issues/57655
1112
  qa: fs:mixed-clients kernel_untar_build failure
1113
* https://tracker.ceph.com/issues/48773
1114
  reached max tries: scrub does not complete
1115
* https://tracker.ceph.com/issues/58340
1116
  mds: fsstress.sh hangs with multimds
1117
* https://tracker.ceph.com/issues/61400
1118
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1119
* https://tracker.ceph.com/issues/57206
1120
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1121
  
1122
* https://tracker.ceph.com/issues/57656
1123
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1124
* https://tracker.ceph.com/issues/61399
1125
  ior build failure
1126
* https://tracker.ceph.com/issues/57676
1127
  error during scrub thrashing: backtrace
1128
  
1129
* https://tracker.ceph.com/issues/38452
1130
  'sudo -u postgres -- pgbench -s 500 -i' failed
1131 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1132 157 Venky Shankar
  blogbench.sh failure
1133
1134
h3. 18 July 2023
1135
1136
* https://tracker.ceph.com/issues/52624
1137
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1138
* https://tracker.ceph.com/issues/57676
1139
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1140
* https://tracker.ceph.com/issues/54460
1141
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1142
* https://tracker.ceph.com/issues/57655
1143
    qa: fs:mixed-clients kernel_untar_build failure
1144
* https://tracker.ceph.com/issues/51964
1145
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1146
* https://tracker.ceph.com/issues/59344
1147
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1148
* https://tracker.ceph.com/issues/61182
1149
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1150
* https://tracker.ceph.com/issues/61957
1151
    test_client_limits.TestClientLimits.test_client_release_bug
1152
* https://tracker.ceph.com/issues/59348
1153
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1154
* https://tracker.ceph.com/issues/61892
1155
    test_strays.TestStrays.test_snapshot_remove failed
1156
* https://tracker.ceph.com/issues/59346
1157
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1158
* https://tracker.ceph.com/issues/44565
1159
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1160
* https://tracker.ceph.com/issues/62067
1161
    ffsb.sh failure "Resource temporarily unavailable"
1162 156 Venky Shankar
1163
1164
h3. 17 July 2023
1165
1166
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1167
1168
* https://tracker.ceph.com/issues/61982
1169
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1170
* https://tracker.ceph.com/issues/59344
1171
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1172
* https://tracker.ceph.com/issues/61182
1173
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1174
* https://tracker.ceph.com/issues/61957
1175
    test_client_limits.TestClientLimits.test_client_release_bug
1176
* https://tracker.ceph.com/issues/61400
1177
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1178
* https://tracker.ceph.com/issues/59348
1179
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1180
* https://tracker.ceph.com/issues/61892
1181
    test_strays.TestStrays.test_snapshot_remove failed
1182
* https://tracker.ceph.com/issues/59346
1183
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1184
* https://tracker.ceph.com/issues/62036
1185
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1186
* https://tracker.ceph.com/issues/61737
1187
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1188
* https://tracker.ceph.com/issues/44565
1189
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1190 155 Rishabh Dave
1191 1 Patrick Donnelly
1192 153 Rishabh Dave
h3. 13 July 2023 Run 2
1193 152 Rishabh Dave
1194
1195
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1196
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1197
1198
* https://tracker.ceph.com/issues/61957
1199
  test_client_limits.TestClientLimits.test_client_release_bug
1200
* https://tracker.ceph.com/issues/61982
1201
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1202
* https://tracker.ceph.com/issues/59348
1203
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1204
* https://tracker.ceph.com/issues/59344
1205
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1206
* https://tracker.ceph.com/issues/54460
1207
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1208
* https://tracker.ceph.com/issues/57655
1209
  qa: fs:mixed-clients kernel_untar_build failure
1210
* https://tracker.ceph.com/issues/61400
1211
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1212
* https://tracker.ceph.com/issues/61399
1213
  ior build failure
1214
1215 151 Venky Shankar
h3. 13 July 2023
1216
1217
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1218
1219
* https://tracker.ceph.com/issues/54460
1220
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1221
* https://tracker.ceph.com/issues/61400
1222
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1223
* https://tracker.ceph.com/issues/57655
1224
    qa: fs:mixed-clients kernel_untar_build failure
1225
* https://tracker.ceph.com/issues/61945
1226
    LibCephFS.DelegTimeout failure
1227
* https://tracker.ceph.com/issues/52624
1228
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1229
* https://tracker.ceph.com/issues/57676
1230
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1231
* https://tracker.ceph.com/issues/59348
1232
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1233
* https://tracker.ceph.com/issues/59344
1234
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1235
* https://tracker.ceph.com/issues/51964
1236
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1237
* https://tracker.ceph.com/issues/59346
1238
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1239
* https://tracker.ceph.com/issues/61982
1240
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1241 150 Rishabh Dave
1242
1243
h3. 13 Jul 2023
1244
1245
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1246
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1247
1248
* https://tracker.ceph.com/issues/61957
1249
  test_client_limits.TestClientLimits.test_client_release_bug
1250
* https://tracker.ceph.com/issues/59348
1251
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1252
* https://tracker.ceph.com/issues/59346
1253
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1254
* https://tracker.ceph.com/issues/48773
1255
  scrub does not complete: reached max tries
1256
* https://tracker.ceph.com/issues/59344
1257
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1258
* https://tracker.ceph.com/issues/52438
1259
  qa: ffsb timeout
1260
* https://tracker.ceph.com/issues/57656
1261
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1262
* https://tracker.ceph.com/issues/58742
1263
  xfstests-dev: kcephfs: generic
1264
* https://tracker.ceph.com/issues/61399
1265 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1266 149 Rishabh Dave
1267 148 Rishabh Dave
h3. 12 July 2023
1268
1269
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1270
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1271
1272
* https://tracker.ceph.com/issues/61892
1273
  test_strays.TestStrays.test_snapshot_remove failed
1274
* https://tracker.ceph.com/issues/59348
1275
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1276
* https://tracker.ceph.com/issues/53859
1277
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1278
* https://tracker.ceph.com/issues/59346
1279
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1280
* https://tracker.ceph.com/issues/58742
1281
  xfstests-dev: kcephfs: generic
1282
* https://tracker.ceph.com/issues/59344
1283
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1284
* https://tracker.ceph.com/issues/52438
1285
  qa: ffsb timeout
1286
* https://tracker.ceph.com/issues/57656
1287
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1288
* https://tracker.ceph.com/issues/54460
1289
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1290
* https://tracker.ceph.com/issues/57655
1291
  qa: fs:mixed-clients kernel_untar_build failure
1292
* https://tracker.ceph.com/issues/61182
1293
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1294
* https://tracker.ceph.com/issues/61400
1295
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1296 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1297 146 Patrick Donnelly
  reached max tries: scrub does not complete
1298
1299
h3. 05 July 2023
1300
1301
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1302
1303 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1304 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1305
1306
h3. 27 Jun 2023
1307
1308
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1309 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1310
1311
* https://tracker.ceph.com/issues/59348
1312
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1313
* https://tracker.ceph.com/issues/54460
1314
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1315
* https://tracker.ceph.com/issues/59346
1316
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1317
* https://tracker.ceph.com/issues/59344
1318
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1319
* https://tracker.ceph.com/issues/61399
1320
  libmpich: undefined references to fi_strerror
1321
* https://tracker.ceph.com/issues/50223
1322
  client.xxxx isn't responding to mclientcaps(revoke)
1323 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1324
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1325 142 Venky Shankar
1326
1327
h3. 22 June 2023
1328
1329
* https://tracker.ceph.com/issues/57676
1330
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1331
* https://tracker.ceph.com/issues/54460
1332
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1333
* https://tracker.ceph.com/issues/59344
1334
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1335
* https://tracker.ceph.com/issues/59348
1336
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1337
* https://tracker.ceph.com/issues/61400
1338
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1339
* https://tracker.ceph.com/issues/57655
1340
    qa: fs:mixed-clients kernel_untar_build failure
1341
* https://tracker.ceph.com/issues/61394
1342
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1343
* https://tracker.ceph.com/issues/61762
1344
    qa: wait_for_clean: failed before timeout expired
1345
* https://tracker.ceph.com/issues/61775
1346
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1347
* https://tracker.ceph.com/issues/44565
1348
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1349
* https://tracker.ceph.com/issues/61790
1350
    cephfs client to mds comms remain silent after reconnect
1351
* https://tracker.ceph.com/issues/61791
1352
    snaptest-git-ceph.sh test timed out (job dead)
1353 139 Venky Shankar
1354
1355
h3. 20 June 2023
1356
1357
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1358
1359
* https://tracker.ceph.com/issues/57676
1360
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1361
* https://tracker.ceph.com/issues/54460
1362
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1363 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1364 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1365 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1366 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1367
* https://tracker.ceph.com/issues/59344
1368
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1369
* https://tracker.ceph.com/issues/59348
1370
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1371
* https://tracker.ceph.com/issues/57656
1372
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1373
* https://tracker.ceph.com/issues/61400
1374
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1375
* https://tracker.ceph.com/issues/57655
1376
    qa: fs:mixed-clients kernel_untar_build failure
1377
* https://tracker.ceph.com/issues/44565
1378
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1379
* https://tracker.ceph.com/issues/61737
1380 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1381
1382
h3. 16 June 2023
1383
1384 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1385 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1386 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1387 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1388
1389
1390
* https://tracker.ceph.com/issues/59344
1391
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1392 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1393
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1394 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1395
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1396
* https://tracker.ceph.com/issues/57656
1397
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1398
* https://tracker.ceph.com/issues/54460
1399
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1400 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1401
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1402 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1403
  libmpich: undefined references to fi_strerror
1404
* https://tracker.ceph.com/issues/58945
1405
  xfstests-dev: ceph-fuse: generic 
1406 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1407 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1408
1409
h3. 24 May 2023
1410
1411
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1412
1413
* https://tracker.ceph.com/issues/57676
1414
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1415
* https://tracker.ceph.com/issues/59683
1416
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1417
* https://tracker.ceph.com/issues/61399
1418
    qa: "[Makefile:299: ior] Error 1"
1419
* https://tracker.ceph.com/issues/61265
1420
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1421
* https://tracker.ceph.com/issues/59348
1422
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1423
* https://tracker.ceph.com/issues/59346
1424
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1425
* https://tracker.ceph.com/issues/61400
1426
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1427
* https://tracker.ceph.com/issues/54460
1428
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1429
* https://tracker.ceph.com/issues/51964
1430
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1431
* https://tracker.ceph.com/issues/59344
1432
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1433
* https://tracker.ceph.com/issues/61407
1434
    mds: abort on CInode::verify_dirfrags
1435
* https://tracker.ceph.com/issues/48773
1436
    qa: scrub does not complete
1437
* https://tracker.ceph.com/issues/57655
1438
    qa: fs:mixed-clients kernel_untar_build failure
1439
* https://tracker.ceph.com/issues/61409
1440 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1441
1442
h3. 15 May 2023
1443 130 Venky Shankar
1444 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1445
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1446
1447
* https://tracker.ceph.com/issues/52624
1448
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1449
* https://tracker.ceph.com/issues/54460
1450
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1451
* https://tracker.ceph.com/issues/57676
1452
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1453
* https://tracker.ceph.com/issues/59684 [kclient bug]
1454
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1455
* https://tracker.ceph.com/issues/59348
1456
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1457 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1458
    dbench test results in call trace in dmesg [kclient bug]
1459 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1460 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1461 125 Venky Shankar
1462
 
1463 129 Rishabh Dave
h3. 11 May 2023
1464
1465
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1466
1467
* https://tracker.ceph.com/issues/59684 [kclient bug]
1468
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1469
* https://tracker.ceph.com/issues/59348
1470
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1471
* https://tracker.ceph.com/issues/57655
1472
  qa: fs:mixed-clients kernel_untar_build failure
1473
* https://tracker.ceph.com/issues/57676
1474
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1475
* https://tracker.ceph.com/issues/55805
1476
  error during scrub thrashing reached max tries in 900 secs
1477
* https://tracker.ceph.com/issues/54460
1478
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1479
* https://tracker.ceph.com/issues/57656
1480
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1481
* https://tracker.ceph.com/issues/58220
1482
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1483 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1484
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1485 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1486
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1487 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1488
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1489 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1490
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1491
1492 125 Venky Shankar
h3. 11 May 2023
1493 127 Venky Shankar
1494
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1495 126 Venky Shankar
1496 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1497
 was included in the branch; however, the PR got updated and needs a retest).
1498
1499
* https://tracker.ceph.com/issues/52624
1500
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1501
* https://tracker.ceph.com/issues/54460
1502
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1503
* https://tracker.ceph.com/issues/57676
1504
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1505
* https://tracker.ceph.com/issues/59683
1506
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1507
* https://tracker.ceph.com/issues/59684 [kclient bug]
1508
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1509
* https://tracker.ceph.com/issues/59348
1510 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1511
1512
h3. 09 May 2023
1513
1514
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1515
1516
* https://tracker.ceph.com/issues/52624
1517
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1518
* https://tracker.ceph.com/issues/58340
1519
    mds: fsstress.sh hangs with multimds
1520
* https://tracker.ceph.com/issues/54460
1521
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1522
* https://tracker.ceph.com/issues/57676
1523
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1524
* https://tracker.ceph.com/issues/51964
1525
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1526
* https://tracker.ceph.com/issues/59350
1527
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1528
* https://tracker.ceph.com/issues/59683
1529
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1530
* https://tracker.ceph.com/issues/59684 [kclient bug]
1531
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1532
* https://tracker.ceph.com/issues/59348
1533 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1534
1535
h3. 10 Apr 2023
1536
1537
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1538
1539
* https://tracker.ceph.com/issues/52624
1540
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1541
* https://tracker.ceph.com/issues/58340
1542
    mds: fsstress.sh hangs with multimds
1543
* https://tracker.ceph.com/issues/54460
1544
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1545
* https://tracker.ceph.com/issues/57676
1546
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1547 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1548 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1549 121 Rishabh Dave
1550 120 Rishabh Dave
h3. 31 Mar 2023
1551 122 Rishabh Dave
1552
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1553 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1554
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1555
1556
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).
1557
1558
* https://tracker.ceph.com/issues/57676
1559
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1560
* https://tracker.ceph.com/issues/54460
1561
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1562
* https://tracker.ceph.com/issues/58220
1563
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1564
* https://tracker.ceph.com/issues/58220#note-9
1565
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1566
* https://tracker.ceph.com/issues/56695
1567
  Command failed (workunit test suites/pjd.sh)
1568
* https://tracker.ceph.com/issues/58564 
1569
  workunit dbench failed with error code 1
1570
* https://tracker.ceph.com/issues/57206
1571
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1572
* https://tracker.ceph.com/issues/57580
1573
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1574
* https://tracker.ceph.com/issues/58940
1575
  ceph osd hit ceph_abort
1576
* https://tracker.ceph.com/issues/55805
1577 118 Venky Shankar
  error during scrub thrashing: reached max tries in 900 secs
1578
1579
h3. 30 March 2023
1580
1581
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1582
1583
* https://tracker.ceph.com/issues/58938
1584
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1585
* https://tracker.ceph.com/issues/51964
1586
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1587
* https://tracker.ceph.com/issues/58340
1588 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1589
1590 115 Venky Shankar
h3. 29 March 2023
1591 114 Venky Shankar
1592
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1593
1594
* https://tracker.ceph.com/issues/56695
1595
    [RHEL stock] pjd test failures
1596
* https://tracker.ceph.com/issues/57676
1597
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1598
* https://tracker.ceph.com/issues/57087
1599
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1600 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1601
    mds: fsstress.sh hangs with multimds
1602 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1603
    qa: fs:mixed-clients kernel_untar_build failure
1604 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1605
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1606 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1607 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1608
1609
h3. 13 Mar 2023
1610
1611
* https://tracker.ceph.com/issues/56695
1612
    [RHEL stock] pjd test failures
1613
* https://tracker.ceph.com/issues/57676
1614
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1615
* https://tracker.ceph.com/issues/51964
1616
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1617
* https://tracker.ceph.com/issues/54460
1618
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1619
* https://tracker.ceph.com/issues/57656
1620 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1621
1622
h3. 09 Mar 2023
1623
1624
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1625
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1626
1627
* https://tracker.ceph.com/issues/56695
1628
    [RHEL stock] pjd test failures
1629
* https://tracker.ceph.com/issues/57676
1630
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1631
* https://tracker.ceph.com/issues/51964
1632
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1633
* https://tracker.ceph.com/issues/54460
1634
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1635
* https://tracker.ceph.com/issues/58340
1636
    mds: fsstress.sh hangs with multimds
1637
* https://tracker.ceph.com/issues/57087
1638 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1639
1640
h3. 07 Mar 2023
1641
1642
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1643
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1644
1645
* https://tracker.ceph.com/issues/56695
1646
    [RHEL stock] pjd test failures
1647
* https://tracker.ceph.com/issues/57676
1648
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1649
* https://tracker.ceph.com/issues/51964
1650
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1651
* https://tracker.ceph.com/issues/57656
1652
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1653
* https://tracker.ceph.com/issues/57655
1654
    qa: fs:mixed-clients kernel_untar_build failure
1655
* https://tracker.ceph.com/issues/58220
1656
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1657
* https://tracker.ceph.com/issues/54460
1658
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1659
* https://tracker.ceph.com/issues/58934
1660 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1661
1662
h3. 28 Feb 2023
1663
1664
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1665
1666
* https://tracker.ceph.com/issues/56695
1667
    [RHEL stock] pjd test failures
1668
* https://tracker.ceph.com/issues/57676
1669
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1670 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1671 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1672
1673 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs whose tests are passing)
1674
1675
h3. 25 Jan 2023
1676
1677
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1678
1679
* https://tracker.ceph.com/issues/52624
1680
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1681
* https://tracker.ceph.com/issues/56695
1682
    [RHEL stock] pjd test failures
1683
* https://tracker.ceph.com/issues/57676
1684
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1685
* https://tracker.ceph.com/issues/56446
1686
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1687
* https://tracker.ceph.com/issues/57206
1688
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1689
* https://tracker.ceph.com/issues/58220
1690
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1691
* https://tracker.ceph.com/issues/58340
1692
  mds: fsstress.sh hangs with multimds
1693
* https://tracker.ceph.com/issues/56011
1694
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1695
* https://tracker.ceph.com/issues/54460
1696 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1697
1698
h3. 30 JAN 2023
1699
1700
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1701
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1702 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1703
1704 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1705
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1706
* https://tracker.ceph.com/issues/56695
1707
  [RHEL stock] pjd test failures
1708
* https://tracker.ceph.com/issues/57676
1709
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1710
* https://tracker.ceph.com/issues/55332
1711
  Failure in snaptest-git-ceph.sh
1712
* https://tracker.ceph.com/issues/51964
1713
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1714
* https://tracker.ceph.com/issues/56446
1715
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1716
* https://tracker.ceph.com/issues/57655 
1717
  qa: fs:mixed-clients kernel_untar_build failure
1718
* https://tracker.ceph.com/issues/54460
1719
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1720 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1721
  mds: fsstress.sh hangs with multimds
1722 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1723 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1724
1725
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1726 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1727
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1728 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1729 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1730
1731
h3. 15 Dec 2022
1732
1733
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1734
1735
* https://tracker.ceph.com/issues/52624
1736
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1737
* https://tracker.ceph.com/issues/56695
1738
    [RHEL stock] pjd test failures
1739
* https://tracker.ceph.com/issues/58219
1740
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1741
* https://tracker.ceph.com/issues/57655
1742
    qa: fs:mixed-clients kernel_untar_build failure
1743
* https://tracker.ceph.com/issues/57676
1744
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1745
* https://tracker.ceph.com/issues/58340
1746 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1747
1748
h3. 08 Dec 2022
1749 99 Venky Shankar
1750 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1751
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1752
1753
(lots of transient git.ceph.com failures)
1754
1755
* https://tracker.ceph.com/issues/52624
1756
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1757
* https://tracker.ceph.com/issues/56695
1758
    [RHEL stock] pjd test failures
1759
* https://tracker.ceph.com/issues/57655
1760
    qa: fs:mixed-clients kernel_untar_build failure
1761
* https://tracker.ceph.com/issues/58219
1762
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1763
* https://tracker.ceph.com/issues/58220
1764
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1765 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1766
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1767 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1768
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1769
* https://tracker.ceph.com/issues/54460
1770
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1771 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1772 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1773
1774
h3. 14 Oct 2022
1775
1776
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1777
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1778
1779
* https://tracker.ceph.com/issues/52624
1780
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1781
* https://tracker.ceph.com/issues/55804
1782
    Command failed (workunit test suites/pjd.sh)
1783
* https://tracker.ceph.com/issues/51964
1784
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1785
* https://tracker.ceph.com/issues/57682
1786
    client: ERROR: test_reconnect_after_blocklisted
1787 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1788 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1789
1790
h3. 10 Oct 2022
1791 92 Rishabh Dave
1792 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1793
1794
reruns
1795
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1796 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1797 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1798 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1799 91 Rishabh Dave
1800
known bugs
1801
* https://tracker.ceph.com/issues/52624
1802
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1803
* https://tracker.ceph.com/issues/50223
1804
  client.xxxx isn't responding to mclientcaps(revoke)
1805
* https://tracker.ceph.com/issues/57299
1806
  qa: test_dump_loads fails with JSONDecodeError
1807
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1808
  qa: fs:mixed-clients kernel_untar_build failure
1809
* https://tracker.ceph.com/issues/57206
1810 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1811
1812
h3. 2022 Sep 29
1813
1814
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1815
1816
* https://tracker.ceph.com/issues/55804
1817
  Command failed (workunit test suites/pjd.sh)
1818
* https://tracker.ceph.com/issues/36593
1819
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1820
* https://tracker.ceph.com/issues/52624
1821
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1822
* https://tracker.ceph.com/issues/51964
1823
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1824
* https://tracker.ceph.com/issues/56632
1825
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1826
* https://tracker.ceph.com/issues/50821
1827 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1828
1829
h3. 2022 Sep 26
1830
1831
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1832
1833
* https://tracker.ceph.com/issues/55804
1834
    qa failure: pjd link tests failed
1835
* https://tracker.ceph.com/issues/57676
1836
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1837
* https://tracker.ceph.com/issues/52624
1838
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1839
* https://tracker.ceph.com/issues/57580
1840
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1841
* https://tracker.ceph.com/issues/48773
1842
    qa: scrub does not complete
1843
* https://tracker.ceph.com/issues/57299
1844
    qa: test_dump_loads fails with JSONDecodeError
1845
* https://tracker.ceph.com/issues/57280
1846
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1847
* https://tracker.ceph.com/issues/57205
1848
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1849
* https://tracker.ceph.com/issues/57656
1850
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1851
* https://tracker.ceph.com/issues/57677
1852
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1853
* https://tracker.ceph.com/issues/57206
1854
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1855
* https://tracker.ceph.com/issues/57446
1856
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1857 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1858
    qa: fs:mixed-clients kernel_untar_build failure
1859 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1860
    client: ERROR: test_reconnect_after_blocklisted
1861 87 Patrick Donnelly
1862
1863
h3. 2022 Sep 22
1864
1865
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1866
1867
* https://tracker.ceph.com/issues/57299
1868
    qa: test_dump_loads fails with JSONDecodeError
1869
* https://tracker.ceph.com/issues/57205
1870
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1871
* https://tracker.ceph.com/issues/52624
1872
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1873
* https://tracker.ceph.com/issues/57580
1874
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1875
* https://tracker.ceph.com/issues/57280
1876
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1877
* https://tracker.ceph.com/issues/48773
1878
    qa: scrub does not complete
1879
* https://tracker.ceph.com/issues/56446
1880
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1881
* https://tracker.ceph.com/issues/57206
1882
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1883
* https://tracker.ceph.com/issues/51267
1884
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1885
1886
NEW:
1887
1888
* https://tracker.ceph.com/issues/57656
1889
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1890
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1891
    qa: fs:mixed-clients kernel_untar_build failure
1892
* https://tracker.ceph.com/issues/57657
1893
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1894
1895
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1896 80 Venky Shankar
1897 79 Venky Shankar
1898
h3. 2022 Sep 16
1899
1900
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1901
1902
* https://tracker.ceph.com/issues/57446
1903
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1904
* https://tracker.ceph.com/issues/57299
1905
    qa: test_dump_loads fails with JSONDecodeError
1906
* https://tracker.ceph.com/issues/50223
1907
    client.xxxx isn't responding to mclientcaps(revoke)
1908
* https://tracker.ceph.com/issues/52624
1909
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1910
* https://tracker.ceph.com/issues/57205
1911
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1912
* https://tracker.ceph.com/issues/57280
1913
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1914
* https://tracker.ceph.com/issues/51282
1915
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1916
* https://tracker.ceph.com/issues/48203
1917
  https://tracker.ceph.com/issues/36593
1918
    qa: quota failure
1919
    qa: quota failure caused by clients stepping on each other
1920
* https://tracker.ceph.com/issues/57580
1921 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1922
1923 76 Rishabh Dave
1924
h3. 2022 Aug 26
1925
1926
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1927
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1928
1929
* https://tracker.ceph.com/issues/57206
1930
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1931
* https://tracker.ceph.com/issues/56632
1932
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1933
* https://tracker.ceph.com/issues/56446
1934
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1935
* https://tracker.ceph.com/issues/51964
1936
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1937
* https://tracker.ceph.com/issues/53859
1938
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1939
1940
* https://tracker.ceph.com/issues/54460
1941
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1942
* https://tracker.ceph.com/issues/54462
1943
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1944
1946
* https://tracker.ceph.com/issues/36593
1947
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1948
1949
* https://tracker.ceph.com/issues/52624
1950
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1951
* https://tracker.ceph.com/issues/55804
1952
  Command failed (workunit test suites/pjd.sh)
1953
* https://tracker.ceph.com/issues/50223
1954
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on rhel

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
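
For context, a filtered re-run like this is scheduled through teuthology-suite; the following is only a rough sketch, not the exact command used for this run (the suite, branch, and machine type are placeholders taken from the links above):

<pre>
# hypothetical example of skipping the broken rhel jobs with --filter-out
teuthology-suite --suite fs \
  --ceph wip-vshankar-testing-20220627-100931 \
  --machine-type smithi \
  --filter-out rhel
</pre>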

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well, with lots of failures. Debugging by dropping PRs and re-running against the master branch; only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3091
3092
h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09
3172
3173
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3174
3175
* https://tracker.ceph.com/issues/49500
3176
    qa: "Assertion `cb_done' failed."
3177
* https://tracker.ceph.com/issues/48805
3178
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3179
* https://tracker.ceph.com/issues/48773
3180
    qa: scrub does not complete
3181
* https://tracker.ceph.com/issues/45434
3182
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3183
* https://tracker.ceph.com/issues/49240
3184
    terminate called after throwing an instance of 'std::bad_alloc'
3185
* https://tracker.ceph.com/issues/49466
3186
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3187
* https://tracker.ceph.com/issues/49684
3188
    qa: fs:cephadm mount does not wait for mds to be created
3189
* https://tracker.ceph.com/issues/48771
3190
    qa: iogen: workload fails to cause balancing