Project

General

Profile

Main » History » Version 247

Rishabh Dave, 04/04/2024 08:04 AM

1 221 Patrick Donnelly
h1. <code>main</code> branch
2 1 Patrick Donnelly
3 247 Rishabh Dave
h3. ADD NEW ENTRY HERE
4
5 246 Rishabh Dave
h3. 4 Apr 2024 TEMP
6
7
https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/
8
9
* https://tracker.ceph.com/issues/64927
10
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
11
* https://tracker.ceph.com/issues/65022
12
  qa: test_max_items_per_obj open procs not fully cleaned up
13
* https://tracker.ceph.com/issues/63699
14
  qa: failed cephfs-shell test_reading_conf
15
* https://tracker.ceph.com/issues/63700
16
  qa: test_cd_with_args failure
17
* https://tracker.ceph.com/issues/65136
18
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
19
* https://tracker.ceph.com/issues/65246
20
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
21
22
* https://tracker.ceph.com/issues/58945
23
  qa: xfstests-dev's generic test suite has failures with fuse client
24
* https://tracker.ceph.com/issues/57656
25
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
26
* https://tracker.ceph.com/issues/63265
27
  qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
28
* https://tracker.ceph.com/issues/62067
29
  ffsb.sh failure "Resource temporarily unavailable"
30
* https://tracker.ceph.com/issues/63949
31
  leak in mds.c detected by valgrind during CephFS QA run
32
* https://tracker.ceph.com/issues/48562
33
  qa: scrub - object missing on disk; some files may be lost
34
* https://tracker.ceph.com/issues/65020
35
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
36
* https://tracker.ceph.com/issues/64572
37
  workunits/fsx.sh failure
38
* https://tracker.ceph.com/issues/57676
39
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
40
* https://tracker.ceph.com/issues/64502
41
  client: ceph-fuse fails to unmount after upgrade to main
42
* https://tracker.ceph.com/issues/65018
43
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
44
* https://tracker.ceph.com/issues/52624
45
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
46
* https://tracker.ceph.com/issues/54741
47
  crash: MDSTableClient::got_journaled_ack(unsigned long)
48 245 Rishabh Dave
49 240 Patrick Donnelly
h3. 2024-04-02
50
51
https://tracker.ceph.com/issues/65215
52
53
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
54
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
55
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
56
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
57
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
58
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
59
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
60
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
61
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
62
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
63 241 Patrick Donnelly
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log":https://tracker.ceph.com/issues/65021
64
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
65
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
66
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
67
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
68 244 Patrick Donnelly
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
69 241 Patrick Donnelly
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534
70 240 Patrick Donnelly
71 236 Patrick Donnelly
h3. 2024-03-28
72
73
https://tracker.ceph.com/issues/65213
74
75 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
76
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
77
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
78 238 Patrick Donnelly
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
79
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
80
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
81 239 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
82
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
83
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
84
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
85
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
86
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
87
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
88
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
89
90
91 236 Patrick Donnelly
92 235 Milind Changire
h3. 2024-03-25
93
94
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
95
* https://tracker.ceph.com/issues/64502
96
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
97
98
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
99
100
* https://tracker.ceph.com/issues/62245
101
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
102
103
104 228 Patrick Donnelly
h3. 2024-03-20
105
106 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
107 228 Patrick Donnelly
108 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
109
110 229 Patrick Donnelly
Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.
111 1 Patrick Donnelly
112 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
113 228 Patrick Donnelly
114 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
115
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
116
* https://tracker.ceph.com/issues/64572
117
    workunits/fsx.sh failure
118
* https://tracker.ceph.com/issues/65018
119
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
120
* https://tracker.ceph.com/issues/64707 (new issue)
121
    suites/fsstress.sh hangs on one client - test times out
122 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
123
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
124
* https://tracker.ceph.com/issues/59684
125
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
126 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
127
    qa: "ceph tell 4.3a deep-scrub" command not found
128
* https://tracker.ceph.com/issues/54108
129
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
130
* https://tracker.ceph.com/issues/65019
131
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log 
132
* https://tracker.ceph.com/issues/65020
133
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
134
* https://tracker.ceph.com/issues/65021
135
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
136
* https://tracker.ceph.com/issues/63699
137
    qa: failed cephfs-shell test_reading_conf
138 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
139
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
140
* https://tracker.ceph.com/issues/50821
141
    qa: untar_snap_rm failure during mds thrashing
142 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
143
    qa: test_max_items_per_obj open procs not fully cleaned up
144 228 Patrick Donnelly
145 226 Venky Shankar
h3.  14th March 2024
146
147
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
148
149 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)
150 226 Venky Shankar
151
* https://tracker.ceph.com/issues/62067
152
    ffsb.sh failure "Resource temporarily unavailable"
153
* https://tracker.ceph.com/issues/57676
154
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
155
* https://tracker.ceph.com/issues/64502
156
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
157
* https://tracker.ceph.com/issues/64572
158
    workunits/fsx.sh failure
159
* https://tracker.ceph.com/issues/63700
160
    qa: test_cd_with_args failure
161
* https://tracker.ceph.com/issues/59684
162
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
163
* https://tracker.ceph.com/issues/61243
164
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
165
166 225 Venky Shankar
h3. 5th March 2024
167
168
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
169
170
* https://tracker.ceph.com/issues/57676
171
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
172
* https://tracker.ceph.com/issues/64502
173
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
174
* https://tracker.ceph.com/issues/63949
175
    leak in mds.c detected by valgrind during CephFS QA run
176
* https://tracker.ceph.com/issues/57656
177
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
178
* https://tracker.ceph.com/issues/63699
179
    qa: failed cephfs-shell test_reading_conf
180
* https://tracker.ceph.com/issues/64572
181
    workunits/fsx.sh failure
182
* https://tracker.ceph.com/issues/64707 (new issue)
183
    suites/fsstress.sh hangs on one client - test times out
184
* https://tracker.ceph.com/issues/59684
185
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
186
* https://tracker.ceph.com/issues/63700
187
    qa: test_cd_with_args failure
188
* https://tracker.ceph.com/issues/64711
189
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
190
* https://tracker.ceph.com/issues/64729 (new issue)
191
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
192
* https://tracker.ceph.com/issues/64730
193
    fs/misc/multiple_rsync.sh workunit times out
194
195 224 Venky Shankar
h3. 26th Feb 2024
196
197
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
198
199
(This run is a bit messy due to
200
201
  a) OCI runtime issues in the testing kernel with centos9
202
  b) SELinux denials related failures
203
  c) Unrelated MON_DOWN warnings)
204
205
* https://tracker.ceph.com/issues/57676
206
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
207
* https://tracker.ceph.com/issues/63700
208
    qa: test_cd_with_args failure
209
* https://tracker.ceph.com/issues/63949
210
    leak in mds.c detected by valgrind during CephFS QA run
211
* https://tracker.ceph.com/issues/59684
212
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
213
* https://tracker.ceph.com/issues/61243
214
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
215
* https://tracker.ceph.com/issues/63699
216
    qa: failed cephfs-shell test_reading_conf
217
* https://tracker.ceph.com/issues/64172
218
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
219
* https://tracker.ceph.com/issues/57656
220
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
221
* https://tracker.ceph.com/issues/64572
222
    workunits/fsx.sh failure
223
224 222 Patrick Donnelly
h3. 20th Feb 2024
225
226
https://github.com/ceph/ceph/pull/55601
227
https://github.com/ceph/ceph/pull/55659
228
229
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
230
231
* https://tracker.ceph.com/issues/64502
232
    client: quincy ceph-fuse fails to unmount after upgrade to main
233
234 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from </code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
235 218 Venky Shankar
236
h3. 19th Feb 2024
237
238 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
239
240 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
241
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
242
* https://tracker.ceph.com/issues/63700
243
    qa: test_cd_with_args failure
244
* https://tracker.ceph.com/issues/63141
245
    qa/cephfs: test_idem_unaffected_root_squash fails
246
* https://tracker.ceph.com/issues/59684
247
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
248
* https://tracker.ceph.com/issues/63949
249
    leak in mds.c detected by valgrind during CephFS QA run
250
* https://tracker.ceph.com/issues/63764
251
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
252
* https://tracker.ceph.com/issues/63699
253
    qa: failed cephfs-shell test_reading_conf
254 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
255
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
256 201 Rishabh Dave
257 217 Venky Shankar
h3. 29 Jan 2024
258
259
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
260
261
* https://tracker.ceph.com/issues/57676
262
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
263
* https://tracker.ceph.com/issues/63949
264
    leak in mds.c detected by valgrind during CephFS QA run
265
* https://tracker.ceph.com/issues/62067
266
    ffsb.sh failure "Resource temporarily unavailable"
267
* https://tracker.ceph.com/issues/64172
268
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
269
* https://tracker.ceph.com/issues/63265
270
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
271
* https://tracker.ceph.com/issues/61243
272
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
273
* https://tracker.ceph.com/issues/59684
274
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
275
* https://tracker.ceph.com/issues/57656
276
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
277
* https://tracker.ceph.com/issues/64209
278
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
279
280 216 Venky Shankar
h3. 17th Jan 2024
281
282
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
283
284
* https://tracker.ceph.com/issues/63764
285
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
286
* https://tracker.ceph.com/issues/57676
287
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
288
* https://tracker.ceph.com/issues/51964
289
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
290
* https://tracker.ceph.com/issues/63949
291
    leak in mds.c detected by valgrind during CephFS QA run
292
* https://tracker.ceph.com/issues/62067
293
    ffsb.sh failure "Resource temporarily unavailable"
294
* https://tracker.ceph.com/issues/61243
295
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
296
* https://tracker.ceph.com/issues/63259
297
    mds: failed to store backtrace and force file system read-only
298
* https://tracker.ceph.com/issues/63265
299
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
300
301
h3. 16 Jan 2024
302 215 Rishabh Dave
303 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
304
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
305
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
306
307
* https://tracker.ceph.com/issues/63764
308
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
309
* https://tracker.ceph.com/issues/63141
310
  qa/cephfs: test_idem_unaffected_root_squash fails
311
* https://tracker.ceph.com/issues/62067
312
  ffsb.sh failure "Resource temporarily unavailable" 
313
* https://tracker.ceph.com/issues/51964
314
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
315
* https://tracker.ceph.com/issues/54462 
316
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
317
* https://tracker.ceph.com/issues/57676
318
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
319
320
* https://tracker.ceph.com/issues/63949
321
  valgrind leak in MDS
322
* https://tracker.ceph.com/issues/64041
323
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
324
* fsstress failure in last run was due a kernel MM layer failure, unrelated to CephFS
325
* from last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS
326
327 213 Venky Shankar
h3. 06 Dec 2023
328
329
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
330
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
331
332
* https://tracker.ceph.com/issues/63764
333
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
334
* https://tracker.ceph.com/issues/63233
335
    mon|client|mds: valgrind reports possible leaks in the MDS
336
* https://tracker.ceph.com/issues/57676
337
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
338
* https://tracker.ceph.com/issues/62580
339
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
340
* https://tracker.ceph.com/issues/62067
341
    ffsb.sh failure "Resource temporarily unavailable"
342
* https://tracker.ceph.com/issues/61243
343
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
344
* https://tracker.ceph.com/issues/62081
345
    tasks/fscrypt-common does not finish, timesout
346
* https://tracker.ceph.com/issues/63265
347
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
348
* https://tracker.ceph.com/issues/63806
349
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
350
351 211 Patrick Donnelly
h3. 30 Nov 2023
352
353
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
354
355
* https://tracker.ceph.com/issues/63699
356 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
357
* https://tracker.ceph.com/issues/63700
358
    qa: test_cd_with_args failure
359 211 Patrick Donnelly
360 210 Venky Shankar
h3. 29 Nov 2023
361
362
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
363
364
* https://tracker.ceph.com/issues/63233
365
    mon|client|mds: valgrind reports possible leaks in the MDS
366
* https://tracker.ceph.com/issues/63141
367
    qa/cephfs: test_idem_unaffected_root_squash fails
368
* https://tracker.ceph.com/issues/57676
369
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
370
* https://tracker.ceph.com/issues/57655
371
    qa: fs:mixed-clients kernel_untar_build failure
372
* https://tracker.ceph.com/issues/62067
373
    ffsb.sh failure "Resource temporarily unavailable"
374
* https://tracker.ceph.com/issues/61243
375
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
376
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
377
* https://tracker.ceph.com/issues/62810
378
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
379
380 206 Venky Shankar
h3. 14 Nov 2023
381 207 Milind Changire
(Milind)
382
383
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
384
385
* https://tracker.ceph.com/issues/53859
386
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
387
* https://tracker.ceph.com/issues/63233
388
  mon|client|mds: valgrind reports possible leaks in the MDS
389
* https://tracker.ceph.com/issues/63521
390
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
391
* https://tracker.ceph.com/issues/57655
392
  qa: fs:mixed-clients kernel_untar_build failure
393
* https://tracker.ceph.com/issues/62580
394
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
395
* https://tracker.ceph.com/issues/57676
396
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
397
* https://tracker.ceph.com/issues/61243
398
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
399
* https://tracker.ceph.com/issues/63141
400
    qa/cephfs: test_idem_unaffected_root_squash fails
401
* https://tracker.ceph.com/issues/51964
402
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
403
* https://tracker.ceph.com/issues/63522
404
    No module named 'tasks.ceph_fuse'
405
    No module named 'tasks.kclient'
406
    No module named 'tasks.cephfs.fuse_mount'
407
    No module named 'tasks.ceph'
408
* https://tracker.ceph.com/issues/63523
409
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
410
411
412
h3. 14 Nov 2023
413 206 Venky Shankar
414
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
415
416
(nvm the fs:upgrade test failure - the PR is excluded from merge)
417
418
* https://tracker.ceph.com/issues/57676
419
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
420
* https://tracker.ceph.com/issues/63233
421
    mon|client|mds: valgrind reports possible leaks in the MDS
422
* https://tracker.ceph.com/issues/63141
423
    qa/cephfs: test_idem_unaffected_root_squash fails
424
* https://tracker.ceph.com/issues/62580
425
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
426
* https://tracker.ceph.com/issues/57655
427
    qa: fs:mixed-clients kernel_untar_build failure
428
* https://tracker.ceph.com/issues/51964
429
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
430
* https://tracker.ceph.com/issues/63519
431
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
432
* https://tracker.ceph.com/issues/57087
433
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
434
* https://tracker.ceph.com/issues/58945
435
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
436
437 204 Rishabh Dave
h3. 7 Nov 2023
438
439 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
440
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
441
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
442 204 Rishabh Dave
443
* https://tracker.ceph.com/issues/53859
444
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
445
* https://tracker.ceph.com/issues/63233
446
  mon|client|mds: valgrind reports possible leaks in the MDS
447
* https://tracker.ceph.com/issues/57655
448
  qa: fs:mixed-clients kernel_untar_build failure
449
* https://tracker.ceph.com/issues/57676
450
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
451
452
* https://tracker.ceph.com/issues/63473
453
  fsstress.sh failed with errno 124
454
455 202 Rishabh Dave
h3. 3 Nov 2023
456 203 Rishabh Dave
457 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
458
459
* https://tracker.ceph.com/issues/63141
460
  qa/cephfs: test_idem_unaffected_root_squash fails
461
* https://tracker.ceph.com/issues/63233
462
  mon|client|mds: valgrind reports possible leaks in the MDS
463
* https://tracker.ceph.com/issues/57656
464
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
465
* https://tracker.ceph.com/issues/57655
466
  qa: fs:mixed-clients kernel_untar_build failure
467
* https://tracker.ceph.com/issues/57676
468
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
469
470
* https://tracker.ceph.com/issues/59531
471
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
472
* https://tracker.ceph.com/issues/52624
473
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
474
475 198 Patrick Donnelly
h3. 24 October 2023
476
477
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
478
479 200 Patrick Donnelly
Two failures:
480
481
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
482
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
483
484
probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.
485
486 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
487
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
488
* https://tracker.ceph.com/issues/57676
489 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
490
* https://tracker.ceph.com/issues/63233
491
    mon|client|mds: valgrind reports possible leaks in the MDS
492
* https://tracker.ceph.com/issues/59531
493
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
494
* https://tracker.ceph.com/issues/57655
495
    qa: fs:mixed-clients kernel_untar_build failure
496 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
497
    ffsb.sh failure "Resource temporarily unavailable"
498
* https://tracker.ceph.com/issues/63411
499
    qa: flush journal may cause timeouts of `scrub status`
500
* https://tracker.ceph.com/issues/61243
501
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
502
* https://tracker.ceph.com/issues/63141
503 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
504 148 Rishabh Dave
505 195 Venky Shankar
h3. 18 Oct 2023
506
507
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
508
509
* https://tracker.ceph.com/issues/52624
510
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
511
* https://tracker.ceph.com/issues/57676
512
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
513
* https://tracker.ceph.com/issues/63233
514
    mon|client|mds: valgrind reports possible leaks in the MDS
515
* https://tracker.ceph.com/issues/63141
516
    qa/cephfs: test_idem_unaffected_root_squash fails
517
* https://tracker.ceph.com/issues/59531
518
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
519
* https://tracker.ceph.com/issues/62658
520
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
521
* https://tracker.ceph.com/issues/62580
522
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
523
* https://tracker.ceph.com/issues/62067
524
    ffsb.sh failure "Resource temporarily unavailable"
525
* https://tracker.ceph.com/issues/57655
526
    qa: fs:mixed-clients kernel_untar_build failure
527
* https://tracker.ceph.com/issues/62036
528
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
529
* https://tracker.ceph.com/issues/58945
530
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
531
* https://tracker.ceph.com/issues/62847
532
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
533
534 193 Venky Shankar
h3. 13 Oct 2023
535
536
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
537
538
* https://tracker.ceph.com/issues/52624
539
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
540
* https://tracker.ceph.com/issues/62936
541
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
542
* https://tracker.ceph.com/issues/47292
543
    cephfs-shell: test_df_for_valid_file failure
544
* https://tracker.ceph.com/issues/63141
545
    qa/cephfs: test_idem_unaffected_root_squash fails
546
* https://tracker.ceph.com/issues/62081
547
    tasks/fscrypt-common does not finish, timesout
548 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
549
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
550 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
551
    mon|client|mds: valgrind reports possible leaks in the MDS
552 193 Venky Shankar
553 190 Patrick Donnelly
h3. 16 Oct 2023
554
555
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
556
557 192 Patrick Donnelly
Infrastructure issues:
558
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
559
    Host lost.
560
561 196 Patrick Donnelly
One followup fix:
562
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
563
564 192 Patrick Donnelly
Failures:
565
566
* https://tracker.ceph.com/issues/56694
567
    qa: avoid blocking forever on hung umount
568
* https://tracker.ceph.com/issues/63089
569
    qa: tasks/mirror times out
570
* https://tracker.ceph.com/issues/52624
571
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
572
* https://tracker.ceph.com/issues/59531
573
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
574
* https://tracker.ceph.com/issues/57676
575
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
576
* https://tracker.ceph.com/issues/62658 
577
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
578
* https://tracker.ceph.com/issues/61243
579
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
580
* https://tracker.ceph.com/issues/57656
581
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
582
* https://tracker.ceph.com/issues/63233
583
  mon|client|mds: valgrind reports possible leaks in the MDS
584 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
585
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
586 192 Patrick Donnelly
587 189 Rishabh Dave
h3. 9 Oct 2023
588
589
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
590
591
* https://tracker.ceph.com/issues/54460
592
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
593
* https://tracker.ceph.com/issues/63141
594
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
595
* https://tracker.ceph.com/issues/62937
596
  logrotate doesn't support parallel execution on same set of logfiles
597
* https://tracker.ceph.com/issues/61400
598
  valgrind+ceph-mon issues
599
* https://tracker.ceph.com/issues/57676
600
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
601
* https://tracker.ceph.com/issues/55805
602
  error during scrub thrashing reached max tries in 900 secs
603
604 188 Venky Shankar
h3. 26 Sep 2023
605
606
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
607
608
* https://tracker.ceph.com/issues/52624
609
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
610
* https://tracker.ceph.com/issues/62873
611
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
612
* https://tracker.ceph.com/issues/61400
613
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
614
* https://tracker.ceph.com/issues/57676
615
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
616
* https://tracker.ceph.com/issues/62682
617
    mon: no mdsmap broadcast after "fs set joinable" is set to true
618
* https://tracker.ceph.com/issues/63089
619
    qa: tasks/mirror times out
620
621 185 Rishabh Dave
h3. 22 Sep 2023
622
623
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
624
625
* https://tracker.ceph.com/issues/59348
626
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
627
* https://tracker.ceph.com/issues/59344
628
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
629
* https://tracker.ceph.com/issues/59531
630
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
631
* https://tracker.ceph.com/issues/61574
632
  build failure for mdtest project
633
* https://tracker.ceph.com/issues/62702
634
  fsstress.sh: MDS slow requests for the internal 'rename' requests
635
* https://tracker.ceph.com/issues/57676
636
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
637
638
* https://tracker.ceph.com/issues/62863 
639
  deadlock in ceph-fuse causes teuthology job to hang and fail
640
* https://tracker.ceph.com/issues/62870
641
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
642
* https://tracker.ceph.com/issues/62873
643
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
644
645 186 Venky Shankar
h3. 20 Sep 2023
646
647
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
648
649
* https://tracker.ceph.com/issues/52624
650
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
651
* https://tracker.ceph.com/issues/61400
652
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
653
* https://tracker.ceph.com/issues/61399
654
    libmpich: undefined references to fi_strerror
655
* https://tracker.ceph.com/issues/62081
656
    tasks/fscrypt-common does not finish, timesout
657
* https://tracker.ceph.com/issues/62658 
658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
659
* https://tracker.ceph.com/issues/62915
660
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
661
* https://tracker.ceph.com/issues/59531
662
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
663
* https://tracker.ceph.com/issues/62873
664
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
665
* https://tracker.ceph.com/issues/62936
666
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
667
* https://tracker.ceph.com/issues/62937
668
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
669
* https://tracker.ceph.com/issues/62510
670
    snaptest-git-ceph.sh failure with fs/thrash
671
* https://tracker.ceph.com/issues/62081
672
    tasks/fscrypt-common does not finish, timesout
673
* https://tracker.ceph.com/issues/62126
674
    test failure: suites/blogbench.sh stops running
675 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
676
    mon: no mdsmap broadcast after "fs set joinable" is set to true
677 186 Venky Shankar
678 184 Milind Changire
h3. 19 Sep 2023
679
680
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
681
682
* https://tracker.ceph.com/issues/58220#note-9
683
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
684
* https://tracker.ceph.com/issues/62702
685
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
686
* https://tracker.ceph.com/issues/57676
687
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
688
* https://tracker.ceph.com/issues/59348
689
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
690
* https://tracker.ceph.com/issues/52624
691
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
692
* https://tracker.ceph.com/issues/51964
693
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
694
* https://tracker.ceph.com/issues/61243
695
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
696
* https://tracker.ceph.com/issues/59344
697
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
698
* https://tracker.ceph.com/issues/62873
699
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
700
* https://tracker.ceph.com/issues/59413
701
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
702
* https://tracker.ceph.com/issues/53859
703
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
704
* https://tracker.ceph.com/issues/62482
705
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
706
707 178 Patrick Donnelly
708 177 Venky Shankar
h3. 13 Sep 2023
709
710
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
711
712
* https://tracker.ceph.com/issues/52624
713
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
714
* https://tracker.ceph.com/issues/57655
715
    qa: fs:mixed-clients kernel_untar_build failure
716
* https://tracker.ceph.com/issues/57676
717
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
718
* https://tracker.ceph.com/issues/61243
719
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
720
* https://tracker.ceph.com/issues/62567
721
    postgres workunit times out - MDS_SLOW_REQUEST in logs
722
* https://tracker.ceph.com/issues/61400
723
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
724
* https://tracker.ceph.com/issues/61399
725
    libmpich: undefined references to fi_strerror
726
* https://tracker.ceph.com/issues/57655
727
    qa: fs:mixed-clients kernel_untar_build failure
728
* https://tracker.ceph.com/issues/57676
729
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
730
* https://tracker.ceph.com/issues/51964
731
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
732
* https://tracker.ceph.com/issues/62081
733
    tasks/fscrypt-common does not finish, timesout
734 178 Patrick Donnelly
735 179 Patrick Donnelly
h3. 2023 Sep 12
736 178 Patrick Donnelly
737
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
738 1 Patrick Donnelly
739 181 Patrick Donnelly
A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:
740
741 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
742 181 Patrick Donnelly
743
Failures:
744
745 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
746
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
747
* https://tracker.ceph.com/issues/57656
748
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
749
* https://tracker.ceph.com/issues/55805
750
  error scrub thrashing reached max tries in 900 secs
751
* https://tracker.ceph.com/issues/62067
752
    ffsb.sh failure "Resource temporarily unavailable"
753
* https://tracker.ceph.com/issues/59344
754
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
755
* https://tracker.ceph.com/issues/61399
756 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
757
* https://tracker.ceph.com/issues/62832
758
  common: config_proxy deadlock during shutdown (and possibly other times)
759
* https://tracker.ceph.com/issues/59413
760 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
761 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
762
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
763
* https://tracker.ceph.com/issues/62567
764
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
765
* https://tracker.ceph.com/issues/54460
766
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
767
* https://tracker.ceph.com/issues/58220#note-9
768
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
769
* https://tracker.ceph.com/issues/59348
770
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
771 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
772
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
773
* https://tracker.ceph.com/issues/62848
774
    qa: fail_fs upgrade scenario hanging
775
* https://tracker.ceph.com/issues/62081
776
    tasks/fscrypt-common does not finish, timesout
777 177 Venky Shankar
778 176 Venky Shankar
h3. 11 Sep 2023
779 175 Venky Shankar
780
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
781
782
* https://tracker.ceph.com/issues/52624
783
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
784
* https://tracker.ceph.com/issues/61399
785
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
786
* https://tracker.ceph.com/issues/57655
787
    qa: fs:mixed-clients kernel_untar_build failure
788
* https://tracker.ceph.com/issues/61399
789
    ior build failure
790
* https://tracker.ceph.com/issues/59531
791
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
792
* https://tracker.ceph.com/issues/59344
793
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
794
* https://tracker.ceph.com/issues/59346
795
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
796
* https://tracker.ceph.com/issues/59348
797
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
798
* https://tracker.ceph.com/issues/57676
799
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
800
* https://tracker.ceph.com/issues/61243
801
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
802
* https://tracker.ceph.com/issues/62567
803
  postgres workunit times out - MDS_SLOW_REQUEST in logs
804
805
806 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
807
808
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
809
810
* https://tracker.ceph.com/issues/51964
811
  test_cephfs_mirror_restart_sync_on_blocklist failure
812
* https://tracker.ceph.com/issues/59348
813
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
814
* https://tracker.ceph.com/issues/53859
815
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
816
* https://tracker.ceph.com/issues/61892
817
  test_strays.TestStrays.test_snapshot_remove failed
818
* https://tracker.ceph.com/issues/54460
819
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
820
* https://tracker.ceph.com/issues/59346
821
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
822
* https://tracker.ceph.com/issues/59344
823
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
824
* https://tracker.ceph.com/issues/62484
825
  qa: ffsb.sh test failure
826
* https://tracker.ceph.com/issues/62567
827
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
828
  
829
* https://tracker.ceph.com/issues/61399
830
  ior build failure
831
* https://tracker.ceph.com/issues/57676
832
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
833
* https://tracker.ceph.com/issues/55805
834
  error scrub thrashing reached max tries in 900 secs
835
836 172 Rishabh Dave
h3. 6 Sep 2023
837 171 Rishabh Dave
838 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
839 171 Rishabh Dave
840 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
841
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
842 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
843
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
844 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
845 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
846
* https://tracker.ceph.com/issues/59348
847
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
848
* https://tracker.ceph.com/issues/54462
849
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
850
* https://tracker.ceph.com/issues/62556
851
  test_acls: xfstests_dev: python2 is missing
852
* https://tracker.ceph.com/issues/62067
853
  ffsb.sh failure "Resource temporarily unavailable"
854
* https://tracker.ceph.com/issues/57656
855
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
856 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
857
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
858 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
859 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
860
861 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
862
  ior build failure
863
* https://tracker.ceph.com/issues/57676
864
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
865
* https://tracker.ceph.com/issues/55805
866
  error scrub thrashing reached max tries in 900 secs
867 173 Rishabh Dave
868
* https://tracker.ceph.com/issues/62567
869
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
870
* https://tracker.ceph.com/issues/62702
871
  workunit test suites/fsstress.sh on smithi066 with status 124
872 170 Rishabh Dave
873
h3. 5 Sep 2023
874
875
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
876
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
877
  this run has failures but acc to Adam King these are not relevant and should be ignored
878
879
* https://tracker.ceph.com/issues/61892
880
  test_snapshot_remove (test_strays.TestStrays) failed
881
* https://tracker.ceph.com/issues/59348
882
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
883
* https://tracker.ceph.com/issues/54462
884
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
885
* https://tracker.ceph.com/issues/62067
886
  ffsb.sh failure "Resource temporarily unavailable"
887
* https://tracker.ceph.com/issues/57656 
888
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
889
* https://tracker.ceph.com/issues/59346
890
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
891
* https://tracker.ceph.com/issues/59344
892
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
893
* https://tracker.ceph.com/issues/50223
894
  client.xxxx isn't responding to mclientcaps(revoke)
895
* https://tracker.ceph.com/issues/57655
896
  qa: fs:mixed-clients kernel_untar_build failure
897
* https://tracker.ceph.com/issues/62187
898
  iozone.sh: line 5: iozone: command not found
899
 
900
* https://tracker.ceph.com/issues/61399
901
  ior build failure
902
* https://tracker.ceph.com/issues/57676
903
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
904
* https://tracker.ceph.com/issues/55805
905
  error scrub thrashing reached max tries in 900 secs
906 169 Venky Shankar
907
908
h3. 31 Aug 2023
909
910
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
911
912
* https://tracker.ceph.com/issues/52624
913
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
914
* https://tracker.ceph.com/issues/62187
915
    iozone: command not found
916
* https://tracker.ceph.com/issues/61399
917
    ior build failure
918
* https://tracker.ceph.com/issues/59531
919
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
920
* https://tracker.ceph.com/issues/61399
921
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
922
* https://tracker.ceph.com/issues/57655
923
    qa: fs:mixed-clients kernel_untar_build failure
924
* https://tracker.ceph.com/issues/59344
925
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
926
* https://tracker.ceph.com/issues/59346
927
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
928
* https://tracker.ceph.com/issues/59348
929
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
930
* https://tracker.ceph.com/issues/59413
931
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
932
* https://tracker.ceph.com/issues/62653
933
    qa: unimplemented fcntl command: 1036 with fsstress
934
* https://tracker.ceph.com/issues/61400
935
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
936
* https://tracker.ceph.com/issues/62658
937
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
938
* https://tracker.ceph.com/issues/62188
939
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
940 168 Venky Shankar
941
942
h3. 25 Aug 2023
943
944
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
945
946
* https://tracker.ceph.com/issues/59344
947
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
948
* https://tracker.ceph.com/issues/59346
949
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
950
* https://tracker.ceph.com/issues/59348
951
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
952
* https://tracker.ceph.com/issues/57655
953
    qa: fs:mixed-clients kernel_untar_build failure
954
* https://tracker.ceph.com/issues/61243
955
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
956
* https://tracker.ceph.com/issues/61399
957
    ior build failure
958
* https://tracker.ceph.com/issues/61399
959
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
960
* https://tracker.ceph.com/issues/62484
961
    qa: ffsb.sh test failure
962
* https://tracker.ceph.com/issues/59531
963
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
964
* https://tracker.ceph.com/issues/62510
965
    snaptest-git-ceph.sh failure with fs/thrash
966 167 Venky Shankar
967
968
h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 JULY 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more extra run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
  test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 JAN 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

Re-runs:
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458

Known bugs:
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26
1831
1832
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1833
1834
* https://tracker.ceph.com/issues/55804
1835
    qa failure: pjd link tests failed
1836
* https://tracker.ceph.com/issues/57676
1837
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1838
* https://tracker.ceph.com/issues/52624
1839
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1840
* https://tracker.ceph.com/issues/57580
1841
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1842
* https://tracker.ceph.com/issues/48773
1843
    qa: scrub does not complete
1844
* https://tracker.ceph.com/issues/57299
1845
    qa: test_dump_loads fails with JSONDecodeError
1846
* https://tracker.ceph.com/issues/57280
1847
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1848
* https://tracker.ceph.com/issues/57205
1849
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1850
* https://tracker.ceph.com/issues/57656
1851
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1852
* https://tracker.ceph.com/issues/57677
1853
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1854
* https://tracker.ceph.com/issues/57206
1855
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1856
* https://tracker.ceph.com/issues/57446
1857
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1858 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1859
    qa: fs:mixed-clients kernel_untar_build failure
1860 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1861
    client: ERROR: test_reconnect_after_blocklisted
1862 87 Patrick Donnelly
1863
1864
h3. 2022 Sep 22
1865
1866
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1867
1868
* https://tracker.ceph.com/issues/57299
1869
    qa: test_dump_loads fails with JSONDecodeError
1870
* https://tracker.ceph.com/issues/57205
1871
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1872
* https://tracker.ceph.com/issues/52624
1873
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1874
* https://tracker.ceph.com/issues/57580
1875
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1876
* https://tracker.ceph.com/issues/57280
1877
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1878
* https://tracker.ceph.com/issues/48773
1879
    qa: scrub does not complete
1880
* https://tracker.ceph.com/issues/56446
1881
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1882
* https://tracker.ceph.com/issues/57206
1883
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1884
* https://tracker.ceph.com/issues/51267
1885
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1886
1887
NEW:
1888
1889
* https://tracker.ceph.com/issues/57656
1890
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1891
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1892
    qa: fs:mixed-clients kernel_untar_build failure
1893
* https://tracker.ceph.com/issues/57657
1894
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1895
1896
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1897 80 Venky Shankar
1898 79 Venky Shankar
1899
h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12
1983
1984
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1985
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1986
1987
* https://tracker.ceph.com/issues/52624
1988
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1989
* https://tracker.ceph.com/issues/56446
1990
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1991
* https://tracker.ceph.com/issues/51964
1992
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1993
* https://tracker.ceph.com/issues/55804
1994
    Command failed (workunit test suites/pjd.sh)
1995
* https://tracker.ceph.com/issues/50223
1996
    client.xxxx isn't responding to mclientcaps(revoke)
1997
* https://tracker.ceph.com/issues/50821
1998 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1999 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2000 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2001
2002
h3. 2022 Aug 04
2003
2004
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2005
2006 69 Rishabh Dave
Unrelated teuthology failure on RHEL
2007 68 Rishabh Dave
2008
h3. 2022 Jul 25
2009
2010
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2011
2012 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2013
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2014 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2015
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2016
2017
* https://tracker.ceph.com/issues/55804
2018
  Command failed (workunit test suites/pjd.sh)
2019
* https://tracker.ceph.com/issues/50223
2020
  client.xxxx isn't responding to mclientcaps(revoke)
2021
2022
* https://tracker.ceph.com/issues/54460
2023
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2024 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2025 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2026 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2027 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2028
2029
h3. 2022 July 22
2030
2031
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2032
2033
MDS_HEALTH_DUMMY error in log fixed by followup commit.
2034
transient selinux ping failure
2035
2036
* https://tracker.ceph.com/issues/56694
2037
    qa: avoid blocking forever on hung umount
2038
* https://tracker.ceph.com/issues/56695
2039
    [RHEL stock] pjd test failures
2040
* https://tracker.ceph.com/issues/56696
2041
    admin keyring disappears during qa run
2042
* https://tracker.ceph.com/issues/56697
2043
    qa: fs/snaps fails for fuse
2044
* https://tracker.ceph.com/issues/50222
2045
    osd: 5.2s0 deep-scrub : stat mismatch
2046
* https://tracker.ceph.com/issues/56698
2047
    client: FAILED ceph_assert(_size == 0)
2048
* https://tracker.ceph.com/issues/50223
2049
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2050 66 Rishabh Dave
2051 65 Rishabh Dave
2052
h3. 2022 Jul 15
2053
2054
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2055
2056
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2057
2058
* https://tracker.ceph.com/issues/53859
2059
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2060
* https://tracker.ceph.com/issues/55804
2061
  Command failed (workunit test suites/pjd.sh)
2062
* https://tracker.ceph.com/issues/50223
2063
  client.xxxx isn't responding to mclientcaps(revoke)
2064
* https://tracker.ceph.com/issues/50222
2065
  osd: deep-scrub : stat mismatch
2066
2067
* https://tracker.ceph.com/issues/56632
2068
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2069
* https://tracker.ceph.com/issues/56634
2070
  workunit test fs/snaps/snaptest-intodir.sh
2071
* https://tracker.ceph.com/issues/56644
2072
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2073
2074 61 Rishabh Dave
2075
2076
h3. 2022 July 05
2077 62 Rishabh Dave
2078 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2079
2080
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2081
2082
On 2nd re-run only a few jobs failed -
2083 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2085
2086
* https://tracker.ceph.com/issues/56446
2087
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2088
* https://tracker.ceph.com/issues/55804
2089
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2090
2091
* https://tracker.ceph.com/issues/56445
2092 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2093
* https://tracker.ceph.com/issues/51267
2094
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2095 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2096
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2097 61 Rishabh Dave
2098 58 Venky Shankar
2099
2100
h3. 2022 July 04
2101
2102
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2103
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2104
2105
* https://tracker.ceph.com/issues/56445
2106 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2107
* https://tracker.ceph.com/issues/56446
2108
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2109
* https://tracker.ceph.com/issues/51964
2110 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2111 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2112 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2113
2114
h3. 2022 June 20
2115
2116
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2117
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2118
2119
* https://tracker.ceph.com/issues/52624
2120
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2121
* https://tracker.ceph.com/issues/55804
2122
    qa failure: pjd link tests failed
2123
* https://tracker.ceph.com/issues/54108
2124
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2125
* https://tracker.ceph.com/issues/55332
2126 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2127
2128
h3. 2022 June 13
2129
2130
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2131
2132
* https://tracker.ceph.com/issues/56024
2133
    cephadm: removes ceph.conf during qa run causing command failure
2134
* https://tracker.ceph.com/issues/48773
2135
    qa: scrub does not complete
2136
* https://tracker.ceph.com/issues/56012
2137
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2138 55 Venky Shankar
2139 54 Venky Shankar
2140
h3. 2022 Jun 13
2141
2142
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2143
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2144
2145
* https://tracker.ceph.com/issues/52624
2146
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2147
* https://tracker.ceph.com/issues/51964
2148
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2149
* https://tracker.ceph.com/issues/53859
2150
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2151
* https://tracker.ceph.com/issues/55804
2152
    qa failure: pjd link tests failed
2153
* https://tracker.ceph.com/issues/56003
2154
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2155
* https://tracker.ceph.com/issues/56011
2156
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2157
* https://tracker.ceph.com/issues/56012
2158 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2159
2160
h3. 2022 Jun 07
2161
2162
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2163
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2164
2165
* https://tracker.ceph.com/issues/52624
2166
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2167
* https://tracker.ceph.com/issues/50223
2168
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2169
* https://tracker.ceph.com/issues/50224
2170 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2171
2172
h3. 2022 May 12
2173 52 Venky Shankar
2174 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2175
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2176
2177
* https://tracker.ceph.com/issues/52624
2178
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2179
* https://tracker.ceph.com/issues/50223
2180
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2181
* https://tracker.ceph.com/issues/55332
2182
    Failure in snaptest-git-ceph.sh
2183
* https://tracker.ceph.com/issues/53859
2184 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2185 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2186
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2187 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2188 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)
2189
2190 50 Venky Shankar
h3. 2022 May 04
2191
2192
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2193 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2194
2195
* https://tracker.ceph.com/issues/52624
2196
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2197
* https://tracker.ceph.com/issues/50223
2198
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2199
* https://tracker.ceph.com/issues/55332
2200
    Failure in snaptest-git-ceph.sh
2201
* https://tracker.ceph.com/issues/53859
2202
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2203
* https://tracker.ceph.com/issues/55516
2204
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2205
* https://tracker.ceph.com/issues/55537
2206
    mds: crash during fs:upgrade test
2207
* https://tracker.ceph.com/issues/55538
2208 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2209
2210
h3. 2022 Apr 25
2211
2212
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2213
2214
* https://tracker.ceph.com/issues/52624
2215
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2216
* https://tracker.ceph.com/issues/50223
2217
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2218
* https://tracker.ceph.com/issues/55258
2219
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2220
* https://tracker.ceph.com/issues/55377
2221 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2222
2223
h3. 2022 Apr 14
2224
2225
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2226
2227
* https://tracker.ceph.com/issues/52624
2228
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2229
* https://tracker.ceph.com/issues/50223
2230
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2231
* https://tracker.ceph.com/issues/52438
2232
    qa: ffsb timeout
2233
* https://tracker.ceph.com/issues/55170
2234
    mds: crash during rejoin (CDir::fetch_keys)
2235
* https://tracker.ceph.com/issues/55331
2236
    pjd failure
2237
* https://tracker.ceph.com/issues/48773
2238
    qa: scrub does not complete
2239
* https://tracker.ceph.com/issues/55332
2240
    Failure in snaptest-git-ceph.sh
2241
* https://tracker.ceph.com/issues/55258
2242 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2243
2244 46 Venky Shankar
h3. 2022 Apr 11
2245 45 Venky Shankar
2246
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2247
2248
* https://tracker.ceph.com/issues/48773
2249
    qa: scrub does not complete
2250
* https://tracker.ceph.com/issues/52624
2251
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2252
* https://tracker.ceph.com/issues/52438
2253
    qa: ffsb timeout
2254
* https://tracker.ceph.com/issues/48680
2255
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2256
* https://tracker.ceph.com/issues/55236
2257
    qa: fs/snaps tests fail with "hit max job timeout"
2258
* https://tracker.ceph.com/issues/54108
2259
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2260
* https://tracker.ceph.com/issues/54971
2261
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2262
* https://tracker.ceph.com/issues/50223
2263
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2264
* https://tracker.ceph.com/issues/55258
2265 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2266 42 Venky Shankar
2267 43 Venky Shankar
h3. 2022 Mar 21
2268
2269
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2270
2271
Run didn't go well, lots of failures - debugging by dropping PRs and running against master branch. Only merging unrelated PRs that pass tests.
2272
2273
2274 42 Venky Shankar
h3. 2022 Mar 08
2275
2276
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2277
2278
rerun with
2279
- (drop) https://github.com/ceph/ceph/pull/44679
2280
- (drop) https://github.com/ceph/ceph/pull/44958
2281
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2282
2283
* https://tracker.ceph.com/issues/54419 (new)
2284
    `ceph orch upgrade start` seems to never reach completion
2285
* https://tracker.ceph.com/issues/51964
2286
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2287
* https://tracker.ceph.com/issues/52624
2288
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2289
* https://tracker.ceph.com/issues/50223
2290
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2291
* https://tracker.ceph.com/issues/52438
2292
    qa: ffsb timeout
2293
* https://tracker.ceph.com/issues/50821
2294
    qa: untar_snap_rm failure during mds thrashing
2295 41 Venky Shankar
2296
2297
h3. 2022 Feb 09
2298
2299
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2300
2301
rerun with
2302
- (drop) https://github.com/ceph/ceph/pull/37938
2303
- (drop) https://github.com/ceph/ceph/pull/44335
2304
- (drop) https://github.com/ceph/ceph/pull/44491
2305
- (drop) https://github.com/ceph/ceph/pull/44501
2306
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2307
2308
* https://tracker.ceph.com/issues/51964
2309
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2310
* https://tracker.ceph.com/issues/54066
2311
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2312
* https://tracker.ceph.com/issues/48773
2313
    qa: scrub does not complete
2314
* https://tracker.ceph.com/issues/52624
2315
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2316
* https://tracker.ceph.com/issues/50223
2317
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2318
* https://tracker.ceph.com/issues/52438
2319 40 Patrick Donnelly
    qa: ffsb timeout
2320
2321
h3. 2022 Feb 01
2322
2323
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2324
2325
* https://tracker.ceph.com/issues/54107
2326
    kclient: hang during umount
2327
* https://tracker.ceph.com/issues/54106
2328
    kclient: hang during workunit cleanup
2329
* https://tracker.ceph.com/issues/54108
2330
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2331
* https://tracker.ceph.com/issues/48773
2332
    qa: scrub does not complete
2333
* https://tracker.ceph.com/issues/52438
2334
    qa: ffsb timeout
2335 36 Venky Shankar
2336
2337
h3. 2022 Jan 13
2338 39 Venky Shankar
2339 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2340 38 Venky Shankar
2341
rerun with:
2342 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2343
- (drop) https://github.com/ceph/ceph/pull/43184
2344
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2345
2346
* https://tracker.ceph.com/issues/50223
2347
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2348
* https://tracker.ceph.com/issues/51282
2349
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2350
* https://tracker.ceph.com/issues/48773
2351
    qa: scrub does not complete
2352
* https://tracker.ceph.com/issues/52624
2353
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2354
* https://tracker.ceph.com/issues/53859
2355 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2356
2357
h3. 2022 Jan 03
2358
2359
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2360
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2361
2362
* https://tracker.ceph.com/issues/50223
2363
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2364
* https://tracker.ceph.com/issues/51964
2365
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2366
* https://tracker.ceph.com/issues/51267
2367
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2368
* https://tracker.ceph.com/issues/51282
2369
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2370
* https://tracker.ceph.com/issues/50821
2371
    qa: untar_snap_rm failure during mds thrashing
2372 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2373
    mds: "FAILED ceph_assert(!segments.empty())"
2374
* https://tracker.ceph.com/issues/52279
2375 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2376 33 Patrick Donnelly
2377
2378
h3. 2021 Dec 22
2379
2380
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2381
2382
* https://tracker.ceph.com/issues/52624
2383
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2384
* https://tracker.ceph.com/issues/50223
2385
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2386
* https://tracker.ceph.com/issues/52279
2387
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2388
* https://tracker.ceph.com/issues/50224
2389
    qa: test_mirroring_init_failure_with_recovery failure
2390
* https://tracker.ceph.com/issues/48773
2391
    qa: scrub does not complete
2392 32 Venky Shankar
2393
2394
h3. 2021 Nov 30
2395
2396
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2397
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2398
2399
* https://tracker.ceph.com/issues/53436
2400
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2401
* https://tracker.ceph.com/issues/51964
2402
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2403
* https://tracker.ceph.com/issues/48812
2404
    qa: test_scrub_pause_and_resume_with_abort failure
2405
* https://tracker.ceph.com/issues/51076
2406
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2407
* https://tracker.ceph.com/issues/50223
2408
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2409
* https://tracker.ceph.com/issues/52624
2410
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2411
* https://tracker.ceph.com/issues/50250
2412
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2413 31 Patrick Donnelly
2414
2415
h3. 2021 November 9
2416
2417
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2418
2419
* https://tracker.ceph.com/issues/53214
2420
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2421
* https://tracker.ceph.com/issues/48773
2422
    qa: scrub does not complete
2423
* https://tracker.ceph.com/issues/50223
2424
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2425
* https://tracker.ceph.com/issues/51282
2426
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2427
* https://tracker.ceph.com/issues/52624
2428
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2429
* https://tracker.ceph.com/issues/53216
2430
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2431
* https://tracker.ceph.com/issues/50250
2432
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2433
2434 30 Patrick Donnelly
2435
2436
h3. 2021 November 03
2437
2438
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2439
2440
* https://tracker.ceph.com/issues/51964
2441
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2442
* https://tracker.ceph.com/issues/51282
2443
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2444
* https://tracker.ceph.com/issues/52436
2445
    fs/ceph: "corrupt mdsmap"
2446
* https://tracker.ceph.com/issues/53074
2447
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2448
* https://tracker.ceph.com/issues/53150
2449
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2450
* https://tracker.ceph.com/issues/53155
2451
    MDSMonitor: assertion during upgrade to v16.2.5+
2452 29 Patrick Donnelly
2453
2454
h3. 2021 October 26
2455
2456
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2457
2458
* https://tracker.ceph.com/issues/53074
2459
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2460
* https://tracker.ceph.com/issues/52997
2461
    testing: hanging umount
2462
* https://tracker.ceph.com/issues/50824
2463
    qa: snaptest-git-ceph bus error
2464
* https://tracker.ceph.com/issues/52436
2465
    fs/ceph: "corrupt mdsmap"
2466
* https://tracker.ceph.com/issues/48773
2467
    qa: scrub does not complete
2468
* https://tracker.ceph.com/issues/53082
2469
    ceph-fuse: segmentation fault in Client::handle_mds_map
2470
* https://tracker.ceph.com/issues/50223
2471
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2472
* https://tracker.ceph.com/issues/52624
2473
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2474
* https://tracker.ceph.com/issues/50224
2475
    qa: test_mirroring_init_failure_with_recovery failure
2476
* https://tracker.ceph.com/issues/50821
2477
    qa: untar_snap_rm failure during mds thrashing
2478
* https://tracker.ceph.com/issues/50250
2479
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2480
2481 27 Patrick Donnelly
2482
2483 28 Patrick Donnelly
h3. 2021 October 19
2484 27 Patrick Donnelly
2485
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2486
2487
* https://tracker.ceph.com/issues/52995
2488
    qa: test_standby_count_wanted failure
2489
* https://tracker.ceph.com/issues/52948
2490
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2491
* https://tracker.ceph.com/issues/52996
2492
    qa: test_perf_counters via test_openfiletable
2493
* https://tracker.ceph.com/issues/48772
2494
    qa: pjd: not ok 9, 44, 80
2495
* https://tracker.ceph.com/issues/52997
2496
    testing: hanging umount
2497
* https://tracker.ceph.com/issues/50250
2498
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2499
* https://tracker.ceph.com/issues/52624
2500
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2501
* https://tracker.ceph.com/issues/50223
2502
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2503
* https://tracker.ceph.com/issues/50821
2504
    qa: untar_snap_rm failure during mds thrashing
2505
* https://tracker.ceph.com/issues/48773
2506
    qa: scrub does not complete
2507 26 Patrick Donnelly
2508
2509
h3. 2021 October 12
2510
2511
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2512
2513
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2514
2515
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2516
2517
2518
* https://tracker.ceph.com/issues/51282
2519
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2520
* https://tracker.ceph.com/issues/52948
2521
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2522
* https://tracker.ceph.com/issues/48773
2523
    qa: scrub does not complete
2524
* https://tracker.ceph.com/issues/50224
2525
    qa: test_mirroring_init_failure_with_recovery failure
2526
* https://tracker.ceph.com/issues/52949
2527
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2528 25 Patrick Donnelly
2529 23 Patrick Donnelly
2530 24 Patrick Donnelly
h3. 2021 October 02
2531
2532
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2533
2534
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2535
2536
test_simple failures caused by PR in this set.
2537
2538
A few reruns because of QA infra noise.
2539
2540
* https://tracker.ceph.com/issues/52822
2541
    qa: failed pacific install on fs:upgrade
2542
* https://tracker.ceph.com/issues/52624
2543
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2544
* https://tracker.ceph.com/issues/50223
2545
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2546
* https://tracker.ceph.com/issues/48773
2547
    qa: scrub does not complete
2548
2549
2550 23 Patrick Donnelly
h3. 2021 September 20
2551
2552
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2553
2554
* https://tracker.ceph.com/issues/52677
2555
    qa: test_simple failure
2556
* https://tracker.ceph.com/issues/51279
2557
    kclient hangs on umount (testing branch)
2558
* https://tracker.ceph.com/issues/50223
2559
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2560
* https://tracker.ceph.com/issues/50250
2561
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2562
* https://tracker.ceph.com/issues/52624
2563
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2564
* https://tracker.ceph.com/issues/52438
2565
    qa: ffsb timeout
2566 22 Patrick Donnelly
2567
2568
h3. 2021 September 10
2569
2570
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2571
2572
* https://tracker.ceph.com/issues/50223
2573
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2574
* https://tracker.ceph.com/issues/50250
2575
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2576
* https://tracker.ceph.com/issues/52624
2577
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2578
* https://tracker.ceph.com/issues/52625
2579
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2580
* https://tracker.ceph.com/issues/52439
2581
    qa: acls does not compile on centos stream
2582
* https://tracker.ceph.com/issues/50821
2583
    qa: untar_snap_rm failure during mds thrashing
2584
* https://tracker.ceph.com/issues/48773
2585
    qa: scrub does not complete
2586
* https://tracker.ceph.com/issues/52626
2587
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2588
* https://tracker.ceph.com/issues/51279
2589
    kclient hangs on umount (testing branch)
2590 21 Patrick Donnelly
2591
2592
h3. 2021 August 27
2593
2594
Several jobs died because of device failures.
2595
2596
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2597
2598
* https://tracker.ceph.com/issues/52430
2599
    mds: fast async create client mount breaks racy test
2600
* https://tracker.ceph.com/issues/52436
2601
    fs/ceph: "corrupt mdsmap"
2602
* https://tracker.ceph.com/issues/52437
2603
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2604
* https://tracker.ceph.com/issues/51282
2605
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2606
* https://tracker.ceph.com/issues/52438
2607
    qa: ffsb timeout
2608
* https://tracker.ceph.com/issues/52439
2609
    qa: acls does not compile on centos stream
2610 20 Patrick Donnelly
2611
2612
h3. 2021 July 30
2613
2614
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2615
2616
* https://tracker.ceph.com/issues/50250
2617
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2618
* https://tracker.ceph.com/issues/51282
2619
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2620
* https://tracker.ceph.com/issues/48773
2621
    qa: scrub does not complete
2622
* https://tracker.ceph.com/issues/51975
2623
    pybind/mgr/stats: KeyError
2624 19 Patrick Donnelly
2625
2626
h3. 2021 July 28
2627
2628
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2629
2630
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2631
2632
* https://tracker.ceph.com/issues/51905
2633
    qa: "error reading sessionmap 'mds1_sessionmap'"
2634
* https://tracker.ceph.com/issues/48773
2635
    qa: scrub does not complete
2636
* https://tracker.ceph.com/issues/50250
2637
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2638
* https://tracker.ceph.com/issues/51267
2639
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2640
* https://tracker.ceph.com/issues/51279
2641
    kclient hangs on umount (testing branch)
2642 18 Patrick Donnelly
2643
2644
h3. 2021 July 16
2645
2646
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2647
2648
* https://tracker.ceph.com/issues/48773
2649
    qa: scrub does not complete
2650
* https://tracker.ceph.com/issues/48772
2651
    qa: pjd: not ok 9, 44, 80
2652
* https://tracker.ceph.com/issues/45434
2653
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2654
* https://tracker.ceph.com/issues/51279
2655
    kclient hangs on umount (testing branch)
2656
* https://tracker.ceph.com/issues/50824
2657
    qa: snaptest-git-ceph bus error
2658 17 Patrick Donnelly
2659
2660
h3. 2021 July 04
2661
2662
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2663
2664
* https://tracker.ceph.com/issues/48773
2665
    qa: scrub does not complete
2666
* https://tracker.ceph.com/issues/39150
2667
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2668
* https://tracker.ceph.com/issues/45434
2669
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2670
* https://tracker.ceph.com/issues/51282
2671
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2672
* https://tracker.ceph.com/issues/48771
2673
    qa: iogen: workload fails to cause balancing
2674
* https://tracker.ceph.com/issues/51279
2675
    kclient hangs on umount (testing branch)
2676
* https://tracker.ceph.com/issues/50250
2677
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2678 16 Patrick Donnelly
2679
2680
h3. 2021 July 01
2681
2682
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2683
2684
* https://tracker.ceph.com/issues/51197
2685
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2686
* https://tracker.ceph.com/issues/50866
2687
    osd: stat mismatch on objects
2688
* https://tracker.ceph.com/issues/48773
2689
    qa: scrub does not complete
2690 15 Patrick Donnelly
2691
2692
h3. 2021 June 26
2693
2694
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2695
2696
* https://tracker.ceph.com/issues/51183
2697
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2698
* https://tracker.ceph.com/issues/51410
2699
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2700
* https://tracker.ceph.com/issues/48773
2701
    qa: scrub does not complete
2702
* https://tracker.ceph.com/issues/51282
2703
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2704
* https://tracker.ceph.com/issues/51169
2705
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2706
* https://tracker.ceph.com/issues/48772
2707
    qa: pjd: not ok 9, 44, 80
2708 14 Patrick Donnelly
2709
2710
h3. 2021 June 21
2711
2712
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2713
2714
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2715
2716
* https://tracker.ceph.com/issues/51282
2717
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2718
* https://tracker.ceph.com/issues/51183
2719
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2720
* https://tracker.ceph.com/issues/48773
2721
    qa: scrub does not complete
2722
* https://tracker.ceph.com/issues/48771
2723
    qa: iogen: workload fails to cause balancing
2724
* https://tracker.ceph.com/issues/51169
2725
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2726
* https://tracker.ceph.com/issues/50495
2727
    libcephfs: shutdown race fails with status 141
2728
* https://tracker.ceph.com/issues/45434
2729
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2730
* https://tracker.ceph.com/issues/50824
2731
    qa: snaptest-git-ceph bus error
2732
* https://tracker.ceph.com/issues/50223
2733
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2734 13 Patrick Donnelly
2735
2736
h3. 2021 June 16
2737
2738
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2739
2740
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2741
2742
* https://tracker.ceph.com/issues/45434
2743
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2744
* https://tracker.ceph.com/issues/51169
2745
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2746
* https://tracker.ceph.com/issues/43216
2747
    MDSMonitor: removes MDS coming out of quorum election
2748
* https://tracker.ceph.com/issues/51278
2749
    mds: "FAILED ceph_assert(!segments.empty())"
2750
* https://tracker.ceph.com/issues/51279
2751
    kclient hangs on umount (testing branch)
2752
* https://tracker.ceph.com/issues/51280
2753
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2754
* https://tracker.ceph.com/issues/51183
2755
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2756
* https://tracker.ceph.com/issues/51281
2757
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2758
* https://tracker.ceph.com/issues/48773
2759
    qa: scrub does not complete
2760
* https://tracker.ceph.com/issues/51076
2761
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2762
* https://tracker.ceph.com/issues/51228
2763
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2764
* https://tracker.ceph.com/issues/51282
2765
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2766 12 Patrick Donnelly
2767
2768
h3. 2021 June 14
2769
2770
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2771
2772
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2773
2774
* https://tracker.ceph.com/issues/51169
2775
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2776
* https://tracker.ceph.com/issues/51228
2777
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2778
* https://tracker.ceph.com/issues/48773
2779
    qa: scrub does not complete
2780
* https://tracker.ceph.com/issues/51183
2781
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2782
* https://tracker.ceph.com/issues/45434
2783
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2784
* https://tracker.ceph.com/issues/51182
2785
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2786
* https://tracker.ceph.com/issues/51229
2787
    qa: test_multi_snap_schedule list difference failure
2788
* https://tracker.ceph.com/issues/50821
2789
    qa: untar_snap_rm failure during mds thrashing
2790 11 Patrick Donnelly
2791
2792
h3. 2021 June 13
2793
2794
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2795
2796
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2797
2798
* https://tracker.ceph.com/issues/51169
2799
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2800
* https://tracker.ceph.com/issues/48773
2801
    qa: scrub does not complete
2802
* https://tracker.ceph.com/issues/51182
2803
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2804
* https://tracker.ceph.com/issues/51183
2805
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2806
* https://tracker.ceph.com/issues/51197
2807
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2808
* https://tracker.ceph.com/issues/45434
2809 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2810
2811
h3. 2021 June 11
2812
2813
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2814
2815
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2816
2817
* https://tracker.ceph.com/issues/51169
2818
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2819
* https://tracker.ceph.com/issues/45434
2820
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2821
* https://tracker.ceph.com/issues/48771
2822
    qa: iogen: workload fails to cause balancing
2823
* https://tracker.ceph.com/issues/43216
2824
    MDSMonitor: removes MDS coming out of quorum election
2825
* https://tracker.ceph.com/issues/51182
2826
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2827
* https://tracker.ceph.com/issues/50223
2828
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2829
* https://tracker.ceph.com/issues/48773
2830
    qa: scrub does not complete
2831
* https://tracker.ceph.com/issues/51183
2832
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2833
* https://tracker.ceph.com/issues/51184
2834
    qa: fs:bugs does not specify distro
2835 9 Patrick Donnelly
2836
2837
h3. 2021 June 03
2838
2839
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2840
2841
* https://tracker.ceph.com/issues/45434
2842
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2843
* https://tracker.ceph.com/issues/50016
2844
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2845
* https://tracker.ceph.com/issues/50821
2846
    qa: untar_snap_rm failure during mds thrashing
2847
* https://tracker.ceph.com/issues/50622 (regression)
2848
    msg: active_connections regression
2849
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2850
    qa: failed umount in test_volumes
2851
* https://tracker.ceph.com/issues/48773
2852
    qa: scrub does not complete
2853
* https://tracker.ceph.com/issues/43216
2854
    MDSMonitor: removes MDS coming out of quorum election
2855 7 Patrick Donnelly
2856
2857 8 Patrick Donnelly
h3. 2021 May 18
2858
2859
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2860
2861
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2862
looked better. Some odd new noise in the rerun relating to packaging and "No
2863
module named 'tasks.ceph'".
2864
2865
* https://tracker.ceph.com/issues/50824
2866
    qa: snaptest-git-ceph bus error
2867
* https://tracker.ceph.com/issues/50622 (regression)
2868
    msg: active_connections regression
2869
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2870
    qa: failed umount in test_volumes
2871
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2872
    qa: quota failure
2873
2874
2875 7 Patrick Donnelly
h3. 2021 May 18
2876
2877
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2878
2879
* https://tracker.ceph.com/issues/50821
2880
    qa: untar_snap_rm failure during mds thrashing
2881
* https://tracker.ceph.com/issues/48773
2882
    qa: scrub does not complete
2883
* https://tracker.ceph.com/issues/45591
2884
    mgr: FAILED ceph_assert(daemon != nullptr)
2885
* https://tracker.ceph.com/issues/50866
2886
    osd: stat mismatch on objects
2887
* https://tracker.ceph.com/issues/50016
2888
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2889
* https://tracker.ceph.com/issues/50867
2890
    qa: fs:mirror: reduced data availability
2893
* https://tracker.ceph.com/issues/50622 (regression)
2894
    msg: active_connections regression
2895
* https://tracker.ceph.com/issues/50223
2896
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2897
* https://tracker.ceph.com/issues/50868
2898
    qa: "kern.log.gz already exists; not overwritten"
2899
* https://tracker.ceph.com/issues/50870
2900
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2901 6 Patrick Donnelly
2902
2903
h3. 2021 May 11
2904
2905
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2906
2907
* one class of failures caused by PR
2908
* https://tracker.ceph.com/issues/48812
2909
    qa: test_scrub_pause_and_resume_with_abort failure
2910
* https://tracker.ceph.com/issues/50390
2911
    mds: monclient: wait_auth_rotating timed out after 30
2912
* https://tracker.ceph.com/issues/48773
2913
    qa: scrub does not complete
2914
* https://tracker.ceph.com/issues/50821
2915
    qa: untar_snap_rm failure during mds thrashing
2916
* https://tracker.ceph.com/issues/50224
2917
    qa: test_mirroring_init_failure_with_recovery failure
2918
* https://tracker.ceph.com/issues/50622 (regression)
2919
    msg: active_connections regression
2920
* https://tracker.ceph.com/issues/50825
2921
    qa: snaptest-git-ceph hang during mon thrashing v2
2924
* https://tracker.ceph.com/issues/50823
2925
    qa: RuntimeError: timeout waiting for cluster to stabilize
2926 5 Patrick Donnelly
2927
2928
h3. 2021 May 14
2929
2930
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2931
2932
* https://tracker.ceph.com/issues/48812
2933
    qa: test_scrub_pause_and_resume_with_abort failure
2934
* https://tracker.ceph.com/issues/50821
2935
    qa: untar_snap_rm failure during mds thrashing
2936
* https://tracker.ceph.com/issues/50622 (regression)
2937
    msg: active_connections regression
2938
* https://tracker.ceph.com/issues/50822
2939
    qa: testing kernel patch for client metrics causes mds abort
2940
* https://tracker.ceph.com/issues/48773
2941
    qa: scrub does not complete
2942
* https://tracker.ceph.com/issues/50823
2943
    qa: RuntimeError: timeout waiting for cluster to stabilize
2944
* https://tracker.ceph.com/issues/50824
2945
    qa: snaptest-git-ceph bus error
2946
* https://tracker.ceph.com/issues/50825
2947
    qa: snaptest-git-ceph hang during mon thrashing v2
2948
* https://tracker.ceph.com/issues/50826
2949
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2950 4 Patrick Donnelly
2951
2952
h3. 2021 May 01
2953
2954
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2955
2956
* https://tracker.ceph.com/issues/45434
2957
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2958
* https://tracker.ceph.com/issues/50281
2959
    qa: untar_snap_rm timeout
2960
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2961
    qa: quota failure
2962
* https://tracker.ceph.com/issues/48773
2963
    qa: scrub does not complete
2964
* https://tracker.ceph.com/issues/50390
2965
    mds: monclient: wait_auth_rotating timed out after 30
2966
* https://tracker.ceph.com/issues/50250
2967
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2968
* https://tracker.ceph.com/issues/50622 (regression)
2969
    msg: active_connections regression
2970
* https://tracker.ceph.com/issues/45591
2971
    mgr: FAILED ceph_assert(daemon != nullptr)
2972
* https://tracker.ceph.com/issues/50221
2973
    qa: snaptest-git-ceph failure in git diff
2974
* https://tracker.ceph.com/issues/50016
2975
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2976 3 Patrick Donnelly
2977
2978
h3. 2021 Apr 15
2979
2980
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2981
2982
* https://tracker.ceph.com/issues/50281
2983
    qa: untar_snap_rm timeout
2984
* https://tracker.ceph.com/issues/50220
2985
    qa: dbench workload timeout
2986
* https://tracker.ceph.com/issues/50246
2987
    mds: failure replaying journal (EMetaBlob)
2988
* https://tracker.ceph.com/issues/50250
2989
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2990
* https://tracker.ceph.com/issues/50016
2991
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2992
* https://tracker.ceph.com/issues/50222
2993
    osd: 5.2s0 deep-scrub : stat mismatch
2994
* https://tracker.ceph.com/issues/45434
2995
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2996
* https://tracker.ceph.com/issues/49845
2997
    qa: failed umount in test_volumes
2998
* https://tracker.ceph.com/issues/37808
2999
    osd: osdmap cache weak_refs assert during shutdown
3000
* https://tracker.ceph.com/issues/50387
3001
    client: fs/snaps failure
3002
* https://tracker.ceph.com/issues/50389
3003
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
3004
* https://tracker.ceph.com/issues/50216
3005
    qa: "ls: cannot access 'lost+found': No such file or directory"
3006
* https://tracker.ceph.com/issues/50390
3007
    mds: monclient: wait_auth_rotating timed out after 30
3008
3009 1 Patrick Donnelly
3010
3011 2 Patrick Donnelly
h3. 2021 Apr 08
3012
3013
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
3014
3015
* https://tracker.ceph.com/issues/45434
3016
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3017
* https://tracker.ceph.com/issues/50016
3018
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3019
* https://tracker.ceph.com/issues/48773
3020
    qa: scrub does not complete
3021
* https://tracker.ceph.com/issues/50279
3022
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
3023
* https://tracker.ceph.com/issues/50246
3024
    mds: failure replaying journal (EMetaBlob)
3025
* https://tracker.ceph.com/issues/48365
3026
    qa: ffsb build failure on CentOS 8.2
3027
* https://tracker.ceph.com/issues/50216
3028
    qa: "ls: cannot access 'lost+found': No such file or directory"
3029
* https://tracker.ceph.com/issues/50223
3030
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3031
* https://tracker.ceph.com/issues/50280
3032
    cephadm: RuntimeError: uid/gid not found
3033
* https://tracker.ceph.com/issues/50281
3034
    qa: untar_snap_rm timeout
3035
3036 1 Patrick Donnelly
h3. 2021 Apr 08
3037
3038
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
3039
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
3040
3041
* https://tracker.ceph.com/issues/50246
3042
    mds: failure replaying journal (EMetaBlob)
3043
* https://tracker.ceph.com/issues/50250
3044
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
3045
3046
3047
h3. 2021 Apr 07
3048
3049
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
3050
3051
* https://tracker.ceph.com/issues/50215
3052
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
3053
* https://tracker.ceph.com/issues/49466
3054
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3055
* https://tracker.ceph.com/issues/50216
3056
    qa: "ls: cannot access 'lost+found': No such file or directory"
3057
* https://tracker.ceph.com/issues/48773
3058
    qa: scrub does not complete
3059
* https://tracker.ceph.com/issues/49845
3060
    qa: failed umount in test_volumes
3061
* https://tracker.ceph.com/issues/50220
3062
    qa: dbench workload timeout
3063
* https://tracker.ceph.com/issues/50221
3064
    qa: snaptest-git-ceph failure in git diff
3065
* https://tracker.ceph.com/issues/50222
3066
    osd: 5.2s0 deep-scrub : stat mismatch
3067
* https://tracker.ceph.com/issues/50223
3068
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
3069
* https://tracker.ceph.com/issues/50224
3070
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

Additionally, one failure was caused by PR https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing