Main » History » Version 236

Patrick Donnelly, 03/28/2024 06:28 PM

h1. <code>main</code> branch

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: "[WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: "Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: "cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    "mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denial-related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
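
For reference, here is a minimal sketch (not the actual teuthology task) of the check that is timing out, assuming a hypothetical mountpoint path: issue <code>fusermount -u</code> and then poll until the kernel stops reporting the path as mounted, failing after a timeout.

<pre><code class="python">
import os
import subprocess
import time

def unmount_fuse(mountpoint, timeout=300, interval=5):
    """Detach a FUSE mount and wait for it to actually disappear."""
    subprocess.run(["fusermount", "-u", mountpoint], check=True)
    deadline = time.time() + timeout
    while os.path.ismount(mountpoint):
        if time.time() > deadline:
            # This is the situation hit in the runs above: the unmount request
            # returned, but the mount never went away within the timeout.
            raise RuntimeError("%s still mounted after %d seconds" % (mountpoint, timeout))
        time.sleep(interval)

# Example (hypothetical path): unmount_fuse("/mnt/cephfs")
</code></pre>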

h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* The fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS.
* From the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS.

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(Ignore the fs:upgrade test failure - the PR is excluded from merge.)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
    kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more extra run to check if blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs that are under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

1052
h3. 18 July 2023
1053
1054
* https://tracker.ceph.com/issues/52624
1055
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1056
* https://tracker.ceph.com/issues/57676
1057
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1058
* https://tracker.ceph.com/issues/54460
1059
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1060
* https://tracker.ceph.com/issues/57655
1061
    qa: fs:mixed-clients kernel_untar_build failure
1062
* https://tracker.ceph.com/issues/51964
1063
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1064
* https://tracker.ceph.com/issues/59344
1065
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1066
* https://tracker.ceph.com/issues/61182
1067
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1068
* https://tracker.ceph.com/issues/61957
1069
    test_client_limits.TestClientLimits.test_client_release_bug
1070
* https://tracker.ceph.com/issues/59348
1071
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1072
* https://tracker.ceph.com/issues/61892
1073
    test_strays.TestStrays.test_snapshot_remove failed
1074
* https://tracker.ceph.com/issues/59346
1075
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1076
* https://tracker.ceph.com/issues/44565
1077
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1078
* https://tracker.ceph.com/issues/62067
1079
    ffsb.sh failure "Resource temporarily unavailable"
1080 156 Venky Shankar
1081
1082
h3. 17 July 2023
1083
1084
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1085
1086
* https://tracker.ceph.com/issues/61982
1087
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1088
* https://tracker.ceph.com/issues/59344
1089
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1090
* https://tracker.ceph.com/issues/61182
1091
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1092
* https://tracker.ceph.com/issues/61957
1093
    test_client_limits.TestClientLimits.test_client_release_bug
1094
* https://tracker.ceph.com/issues/61400
1095
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1096
* https://tracker.ceph.com/issues/59348
1097
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1098
* https://tracker.ceph.com/issues/61892
1099
    test_strays.TestStrays.test_snapshot_remove failed
1100
* https://tracker.ceph.com/issues/59346
1101
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1102
* https://tracker.ceph.com/issues/62036
1103
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1104
* https://tracker.ceph.com/issues/61737
1105
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1106
* https://tracker.ceph.com/issues/44565
1107
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1108 155 Rishabh Dave
1109 1 Patrick Donnelly
1110 153 Rishabh Dave
h3. 13 July 2023 Run 2
1111 152 Rishabh Dave
1112
1113
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1114
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1115
1116
* https://tracker.ceph.com/issues/61957
1117
  test_client_limits.TestClientLimits.test_client_release_bug
1118
* https://tracker.ceph.com/issues/61982
1119
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1120
* https://tracker.ceph.com/issues/59348
1121
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1122
* https://tracker.ceph.com/issues/59344
1123
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1124
* https://tracker.ceph.com/issues/54460
1125
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1126
* https://tracker.ceph.com/issues/57655
1127
  qa: fs:mixed-clients kernel_untar_build failure
1128
* https://tracker.ceph.com/issues/61400
1129
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1130
* https://tracker.ceph.com/issues/61399
1131
  ior build failure
1132
1133 151 Venky Shankar
h3. 13 July 2023
1134
1135
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1136
1137
* https://tracker.ceph.com/issues/54460
1138
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1139
* https://tracker.ceph.com/issues/61400
1140
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1141
* https://tracker.ceph.com/issues/57655
1142
    qa: fs:mixed-clients kernel_untar_build failure
1143
* https://tracker.ceph.com/issues/61945
1144
    LibCephFS.DelegTimeout failure
1145
* https://tracker.ceph.com/issues/52624
1146
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1147
* https://tracker.ceph.com/issues/57676
1148
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1149
* https://tracker.ceph.com/issues/59348
1150
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1151
* https://tracker.ceph.com/issues/59344
1152
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1153
* https://tracker.ceph.com/issues/51964
1154
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1155
* https://tracker.ceph.com/issues/59346
1156
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1157
* https://tracker.ceph.com/issues/61982
1158
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1159 150 Rishabh Dave
1160
1161
h3. 13 Jul 2023
1162
1163
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1164
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1165
1166
* https://tracker.ceph.com/issues/61957
1167
  test_client_limits.TestClientLimits.test_client_release_bug
1168
* https://tracker.ceph.com/issues/59348
1169
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1170
* https://tracker.ceph.com/issues/59346
1171
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1172
* https://tracker.ceph.com/issues/48773
1173
  scrub does not complete: reached max tries
1174
* https://tracker.ceph.com/issues/59344
1175
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1176
* https://tracker.ceph.com/issues/52438
1177
  qa: ffsb timeout
1178
* https://tracker.ceph.com/issues/57656
1179
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1180
* https://tracker.ceph.com/issues/58742
1181
  xfstests-dev: kcephfs: generic
1182
* https://tracker.ceph.com/issues/61399
1183 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1184 149 Rishabh Dave
1185 148 Rishabh Dave
h3. 12 July 2023
1186
1187
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1188
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1189
1190
* https://tracker.ceph.com/issues/61892
1191
  test_strays.TestStrays.test_snapshot_remove failed
1192
* https://tracker.ceph.com/issues/59348
1193
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1194
* https://tracker.ceph.com/issues/53859
1195
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1196
* https://tracker.ceph.com/issues/59346
1197
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1198
* https://tracker.ceph.com/issues/58742
1199
  xfstests-dev: kcephfs: generic
1200
* https://tracker.ceph.com/issues/59344
1201
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1202
* https://tracker.ceph.com/issues/52438
1203
  qa: ffsb timeout
1204
* https://tracker.ceph.com/issues/57656
1205
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1206
* https://tracker.ceph.com/issues/54460
1207
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1208
* https://tracker.ceph.com/issues/57655
1209
  qa: fs:mixed-clients kernel_untar_build failure
1210
* https://tracker.ceph.com/issues/61182
1211
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1212
* https://tracker.ceph.com/issues/61400
1213
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1214 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1215 146 Patrick Donnelly
  reached max tries: scrub does not complete
1216
1217
h3. 05 July 2023
1218
1219
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1220
1221 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1222 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1223
1224
h3. 27 Jun 2023
1225
1226
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1227 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1228
1229
* https://tracker.ceph.com/issues/59348
1230
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1231
* https://tracker.ceph.com/issues/54460
1232
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1233
* https://tracker.ceph.com/issues/59346
1234
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1235
* https://tracker.ceph.com/issues/59344
1236
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1237
* https://tracker.ceph.com/issues/61399
1238
  libmpich: undefined references to fi_strerror
1239
* https://tracker.ceph.com/issues/50223
1240
  client.xxxx isn't responding to mclientcaps(revoke)
1241 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1242
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1243 142 Venky Shankar
1244
1245
h3. 22 June 2023
1246
1247
* https://tracker.ceph.com/issues/57676
1248
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1249
* https://tracker.ceph.com/issues/54460
1250
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1251
* https://tracker.ceph.com/issues/59344
1252
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1253
* https://tracker.ceph.com/issues/59348
1254
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1255
* https://tracker.ceph.com/issues/61400
1256
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1257
* https://tracker.ceph.com/issues/57655
1258
    qa: fs:mixed-clients kernel_untar_build failure
1259
* https://tracker.ceph.com/issues/61394
1260
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1261
* https://tracker.ceph.com/issues/61762
1262
    qa: wait_for_clean: failed before timeout expired
1263
* https://tracker.ceph.com/issues/61775
1264
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1265
* https://tracker.ceph.com/issues/44565
1266
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1267
* https://tracker.ceph.com/issues/61790
1268
    cephfs client to mds comms remain silent after reconnect
1269
* https://tracker.ceph.com/issues/61791
1270
    snaptest-git-ceph.sh test timed out (job dead)
1271 139 Venky Shankar
1272
1273
h3. 20 June 2023
1274
1275
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1276
1277
* https://tracker.ceph.com/issues/57676
1278
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1279
* https://tracker.ceph.com/issues/54460
1280
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1281 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1282 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1283 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1284 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1285
* https://tracker.ceph.com/issues/59344
1286
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1287
* https://tracker.ceph.com/issues/59348
1288
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1289
* https://tracker.ceph.com/issues/57656
1290
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1291
* https://tracker.ceph.com/issues/61400
1292
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1293
* https://tracker.ceph.com/issues/57655
1294
    qa: fs:mixed-clients kernel_untar_build failure
1295
* https://tracker.ceph.com/issues/44565
1296
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1297
* https://tracker.ceph.com/issues/61737
1298 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1299
1300
h3. 16 June 2023
1301
1302 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1303 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1304 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1305 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1306
1307
1308
* https://tracker.ceph.com/issues/59344
1309
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1310 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1311
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1312 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1313
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1314
* https://tracker.ceph.com/issues/57656
1315
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1316
* https://tracker.ceph.com/issues/54460
1317
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1318 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1319
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1320 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1321
  libmpich: undefined references to fi_strerror
1322
* https://tracker.ceph.com/issues/58945
1323
  xfstests-dev: ceph-fuse: generic 
1324 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1325 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1326
1327
h3. 24 May 2023
1328
1329
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1330
1331
* https://tracker.ceph.com/issues/57676
1332
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1333
* https://tracker.ceph.com/issues/59683
1334
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1335
* https://tracker.ceph.com/issues/61399
1336
    qa: "[Makefile:299: ior] Error 1"
1337
* https://tracker.ceph.com/issues/61265
1338
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1339
* https://tracker.ceph.com/issues/59348
1340
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1341
* https://tracker.ceph.com/issues/59346
1342
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1343
* https://tracker.ceph.com/issues/61400
1344
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1345
* https://tracker.ceph.com/issues/54460
1346
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1347
* https://tracker.ceph.com/issues/51964
1348
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1349
* https://tracker.ceph.com/issues/59344
1350
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1351
* https://tracker.ceph.com/issues/61407
1352
    mds: abort on CInode::verify_dirfrags
1353
* https://tracker.ceph.com/issues/48773
1354
    qa: scrub does not complete
1355
* https://tracker.ceph.com/issues/57655
1356
    qa: fs:mixed-clients kernel_untar_build failure
1357
* https://tracker.ceph.com/issues/61409
1358 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1359
1360
h3. 15 May 2023
1361 130 Venky Shankar
1362 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1363
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1364
1365
* https://tracker.ceph.com/issues/52624
1366
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1367
* https://tracker.ceph.com/issues/54460
1368
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1369
* https://tracker.ceph.com/issues/57676
1370
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1371
* https://tracker.ceph.com/issues/59684 [kclient bug]
1372
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1373
* https://tracker.ceph.com/issues/59348
1374
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1375 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1376
    dbench test results in call trace in dmesg [kclient bug]
1377 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1378 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1379 125 Venky Shankar
1380
 
1381 129 Rishabh Dave
h3. 11 May 2023
1382
1383
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1384
1385
* https://tracker.ceph.com/issues/59684 [kclient bug]
1386
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1387
* https://tracker.ceph.com/issues/59348
1388
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1389
* https://tracker.ceph.com/issues/57655
1390
  qa: fs:mixed-clients kernel_untar_build failure
1391
* https://tracker.ceph.com/issues/57676
1392
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1393
* https://tracker.ceph.com/issues/55805
1394
  error during scrub thrashing reached max tries in 900 secs
1395
* https://tracker.ceph.com/issues/54460
1396
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1397
* https://tracker.ceph.com/issues/57656
1398
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1399
* https://tracker.ceph.com/issues/58220
1400
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1401 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1402
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1403 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1404
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1405 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1406
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1407 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1408
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1409
1410 125 Venky Shankar
h3. 11 May 2023
1411 127 Venky Shankar
1412
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1413 126 Venky Shankar
1414 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1415
 was included in the branch; however, the PR got updated and needs a retest).
1416
1417
* https://tracker.ceph.com/issues/52624
1418
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1419
* https://tracker.ceph.com/issues/54460
1420
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1421
* https://tracker.ceph.com/issues/57676
1422
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1423
* https://tracker.ceph.com/issues/59683
1424
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1425
* https://tracker.ceph.com/issues/59684 [kclient bug]
1426
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1427
* https://tracker.ceph.com/issues/59348
1428 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1429
1430
h3. 09 May 2023
1431
1432
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1433
1434
* https://tracker.ceph.com/issues/52624
1435
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1436
* https://tracker.ceph.com/issues/58340
1437
    mds: fsstress.sh hangs with multimds
1438
* https://tracker.ceph.com/issues/54460
1439
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1440
* https://tracker.ceph.com/issues/57676
1441
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1442
* https://tracker.ceph.com/issues/51964
1443
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1444
* https://tracker.ceph.com/issues/59350
1445
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1446
* https://tracker.ceph.com/issues/59683
1447
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1448
* https://tracker.ceph.com/issues/59684 [kclient bug]
1449
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1450
* https://tracker.ceph.com/issues/59348
1451 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1452
1453
h3. 10 Apr 2023
1454
1455
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1456
1457
* https://tracker.ceph.com/issues/52624
1458
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1459
* https://tracker.ceph.com/issues/58340
1460
    mds: fsstress.sh hangs with multimds
1461
* https://tracker.ceph.com/issues/54460
1462
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1463
* https://tracker.ceph.com/issues/57676
1464
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1465 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1466 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1467 121 Rishabh Dave
1468 120 Rishabh Dave
h3. 31 Mar 2023
1469 122 Rishabh Dave
1470
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1471 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1472
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1473
1474
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1475
1476
* https://tracker.ceph.com/issues/57676
1477
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1478
* https://tracker.ceph.com/issues/54460
1479
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1480
* https://tracker.ceph.com/issues/58220
1481
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1482
* https://tracker.ceph.com/issues/58220#note-9
1483
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1484
* https://tracker.ceph.com/issues/56695
1485
  Command failed (workunit test suites/pjd.sh)
1486
* https://tracker.ceph.com/issues/58564 
1487
  workunit dbench failed with error code 1
1488
* https://tracker.ceph.com/issues/57206
1489
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1490
* https://tracker.ceph.com/issues/57580
1491
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1492
* https://tracker.ceph.com/issues/58940
1493
  ceph osd hit ceph_abort
1494
* https://tracker.ceph.com/issues/55805
1495 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1496
1497
h3. 30 March 2023
1498
1499
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1500
1501
* https://tracker.ceph.com/issues/58938
1502
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1503
* https://tracker.ceph.com/issues/51964
1504
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1505
* https://tracker.ceph.com/issues/58340
1506 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1507
1508 115 Venky Shankar
h3. 29 March 2023
1509 114 Venky Shankar
1510
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1511
1512
* https://tracker.ceph.com/issues/56695
1513
    [RHEL stock] pjd test failures
1514
* https://tracker.ceph.com/issues/57676
1515
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1516
* https://tracker.ceph.com/issues/57087
1517
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1518 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1519
    mds: fsstress.sh hangs with multimds
1520 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1521
    qa: fs:mixed-clients kernel_untar_build failure
1522 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1523
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1524 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1525 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1526
1527
h3. 13 Mar 2023
1528
1529
* https://tracker.ceph.com/issues/56695
1530
    [RHEL stock] pjd test failures
1531
* https://tracker.ceph.com/issues/57676
1532
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1533
* https://tracker.ceph.com/issues/51964
1534
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1535
* https://tracker.ceph.com/issues/54460
1536
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1537
* https://tracker.ceph.com/issues/57656
1538 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1539
1540
h3. 09 Mar 2023
1541
1542
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1543
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1544
1545
* https://tracker.ceph.com/issues/56695
1546
    [RHEL stock] pjd test failures
1547
* https://tracker.ceph.com/issues/57676
1548
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1549
* https://tracker.ceph.com/issues/51964
1550
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1551
* https://tracker.ceph.com/issues/54460
1552
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1553
* https://tracker.ceph.com/issues/58340
1554
    mds: fsstress.sh hangs with multimds
1555
* https://tracker.ceph.com/issues/57087
1556 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1557
1558
h3. 07 Mar 2023
1559
1560
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1561
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1562
1563
* https://tracker.ceph.com/issues/56695
1564
    [RHEL stock] pjd test failures
1565
* https://tracker.ceph.com/issues/57676
1566
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1567
* https://tracker.ceph.com/issues/51964
1568
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1569
* https://tracker.ceph.com/issues/57656
1570
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1571
* https://tracker.ceph.com/issues/57655
1572
    qa: fs:mixed-clients kernel_untar_build failure
1573
* https://tracker.ceph.com/issues/58220
1574
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1575
* https://tracker.ceph.com/issues/54460
1576
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1577
* https://tracker.ceph.com/issues/58934
1578 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1579
1580
h3. 28 Feb 2023
1581
1582
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1583
1584
* https://tracker.ceph.com/issues/56695
1585
    [RHEL stock] pjd test failures
1586
* https://tracker.ceph.com/issues/57676
1587
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1588 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1589 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1590
1591 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1592
1593
h3. 25 Jan 2023
1594
1595
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1596
1597
* https://tracker.ceph.com/issues/52624
1598
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1599
* https://tracker.ceph.com/issues/56695
1600
    [RHEL stock] pjd test failures
1601
* https://tracker.ceph.com/issues/57676
1602
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1603
* https://tracker.ceph.com/issues/56446
1604
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1605
* https://tracker.ceph.com/issues/57206
1606
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1607
* https://tracker.ceph.com/issues/58220
1608
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1609
* https://tracker.ceph.com/issues/58340
1610
  mds: fsstress.sh hangs with multimds
1611
* https://tracker.ceph.com/issues/56011
1612
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1613
* https://tracker.ceph.com/issues/54460
1614 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1615
1616
h3. 30 JAN 2023
1617
1618
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1619
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1620 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1621
1622 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1623
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1624
* https://tracker.ceph.com/issues/56695
1625
  [RHEL stock] pjd test failures
1626
* https://tracker.ceph.com/issues/57676
1627
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1628
* https://tracker.ceph.com/issues/55332
1629
  Failure in snaptest-git-ceph.sh
1630
* https://tracker.ceph.com/issues/51964
1631
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1632
* https://tracker.ceph.com/issues/56446
1633
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1634
* https://tracker.ceph.com/issues/57655 
1635
  qa: fs:mixed-clients kernel_untar_build failure
1636
* https://tracker.ceph.com/issues/54460
1637
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1638 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1639
  mds: fsstress.sh hangs with multimds
1640 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1641 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1642
1643
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1644 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1645
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1646 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1647 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1648
1649
h3. 15 Dec 2022
1650
1651
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1652
1653
* https://tracker.ceph.com/issues/52624
1654
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1655
* https://tracker.ceph.com/issues/56695
1656
    [RHEL stock] pjd test failures
1657
* https://tracker.ceph.com/issues/58219
1658
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1659
* https://tracker.ceph.com/issues/57655
1660
    qa: fs:mixed-clients kernel_untar_build failure
1661
* https://tracker.ceph.com/issues/57676
1662
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1663
* https://tracker.ceph.com/issues/58340
1664 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1665
1666
h3. 08 Dec 2022
1667 99 Venky Shankar
1668 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1669
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1670
1671
(lots of transient git.ceph.com failures)
1672
1673
* https://tracker.ceph.com/issues/52624
1674
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1675
* https://tracker.ceph.com/issues/56695
1676
    [RHEL stock] pjd test failures
1677
* https://tracker.ceph.com/issues/57655
1678
    qa: fs:mixed-clients kernel_untar_build failure
1679
* https://tracker.ceph.com/issues/58219
1680
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1681
* https://tracker.ceph.com/issues/58220
1682
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1683 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1684
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1685 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1686
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1687
* https://tracker.ceph.com/issues/54460
1688
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1689 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1690 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1691
1692
h3. 14 Oct 2022
1693
1694
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1695
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1696
1697
* https://tracker.ceph.com/issues/52624
1698
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1699
* https://tracker.ceph.com/issues/55804
1700
    Command failed (workunit test suites/pjd.sh)
1701
* https://tracker.ceph.com/issues/51964
1702
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1703
* https://tracker.ceph.com/issues/57682
1704
    client: ERROR: test_reconnect_after_blocklisted
1705 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1706 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1707
1708
h3. 10 Oct 2022
1709 92 Rishabh Dave
1710 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1711
1712
reruns
1713
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1714 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1715 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1716 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1717 91 Rishabh Dave
1718
known bugs
1719
* https://tracker.ceph.com/issues/52624
1720
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1721
* https://tracker.ceph.com/issues/50223
1722
  client.xxxx isn't responding to mclientcaps(revoke)
1723
* https://tracker.ceph.com/issues/57299
1724
  qa: test_dump_loads fails with JSONDecodeError
1725
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1726
  qa: fs:mixed-clients kernel_untar_build failure
1727
* https://tracker.ceph.com/issues/57206
1728 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1729
1730
h3. 2022 Sep 29
1731
1732
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1733
1734
* https://tracker.ceph.com/issues/55804
1735
  Command failed (workunit test suites/pjd.sh)
1736
* https://tracker.ceph.com/issues/36593
1737
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1738
* https://tracker.ceph.com/issues/52624
1739
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1740
* https://tracker.ceph.com/issues/51964
1741
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1742
* https://tracker.ceph.com/issues/56632
1743
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1744
* https://tracker.ceph.com/issues/50821
1745 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1746
1747
h3. 2022 Sep 26
1748
1749
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1750
1751
* https://tracker.ceph.com/issues/55804
1752
    qa failure: pjd link tests failed
1753
* https://tracker.ceph.com/issues/57676
1754
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1755
* https://tracker.ceph.com/issues/52624
1756
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1757
* https://tracker.ceph.com/issues/57580
1758
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1759
* https://tracker.ceph.com/issues/48773
1760
    qa: scrub does not complete
1761
* https://tracker.ceph.com/issues/57299
1762
    qa: test_dump_loads fails with JSONDecodeError
1763
* https://tracker.ceph.com/issues/57280
1764
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1765
* https://tracker.ceph.com/issues/57205
1766
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1767
* https://tracker.ceph.com/issues/57656
1768
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1769
* https://tracker.ceph.com/issues/57677
1770
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1771
* https://tracker.ceph.com/issues/57206
1772
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1773
* https://tracker.ceph.com/issues/57446
1774
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1775 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1776
    qa: fs:mixed-clients kernel_untar_build failure
1777 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1778
    client: ERROR: test_reconnect_after_blocklisted
1779 87 Patrick Donnelly
1780
1781
h3. 2022 Sep 22
1782
1783
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1784
1785
* https://tracker.ceph.com/issues/57299
1786
    qa: test_dump_loads fails with JSONDecodeError
1787
* https://tracker.ceph.com/issues/57205
1788
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1789
* https://tracker.ceph.com/issues/52624
1790
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1791
* https://tracker.ceph.com/issues/57580
1792
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1793
* https://tracker.ceph.com/issues/57280
1794
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1795
* https://tracker.ceph.com/issues/48773
1796
    qa: scrub does not complete
1797
* https://tracker.ceph.com/issues/56446
1798
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1799
* https://tracker.ceph.com/issues/57206
1800
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1801
* https://tracker.ceph.com/issues/51267
1802
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1803
1804
NEW:
1805
1806
* https://tracker.ceph.com/issues/57656
1807
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1808
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1809
    qa: fs:mixed-clients kernel_untar_build failure
1810
* https://tracker.ceph.com/issues/57657
1811
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1812
1813
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1814 80 Venky Shankar
1815 79 Venky Shankar
1816
h3. 2022 Sep 16
1817
1818
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1819
1820
* https://tracker.ceph.com/issues/57446
1821
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1822
* https://tracker.ceph.com/issues/57299
1823
    qa: test_dump_loads fails with JSONDecodeError
1824
* https://tracker.ceph.com/issues/50223
1825
    client.xxxx isn't responding to mclientcaps(revoke)
1826
* https://tracker.ceph.com/issues/52624
1827
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1828
* https://tracker.ceph.com/issues/57205
1829
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1830
* https://tracker.ceph.com/issues/57280
1831
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1832
* https://tracker.ceph.com/issues/51282
1833
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1834
* https://tracker.ceph.com/issues/48203
1835
  https://tracker.ceph.com/issues/36593
1836
    qa: quota failure
1837
    qa: quota failure caused by clients stepping on each other
1838
* https://tracker.ceph.com/issues/57580
1839 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1840
1841 76 Rishabh Dave
1842
h3. 2022 Aug 26
1843
1844
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1845
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1846
1847
* https://tracker.ceph.com/issues/57206
1848
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1849
* https://tracker.ceph.com/issues/56632
1850
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1851
* https://tracker.ceph.com/issues/56446
1852
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1853
* https://tracker.ceph.com/issues/51964
1854
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1855
* https://tracker.ceph.com/issues/53859
1856
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1857
1858
* https://tracker.ceph.com/issues/54460
1859
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1860
* https://tracker.ceph.com/issues/54462
1861
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1862
* https://tracker.ceph.com/issues/54460
1863
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1864
* https://tracker.ceph.com/issues/36593
1865
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1866
1867
* https://tracker.ceph.com/issues/52624
1868
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1869
* https://tracker.ceph.com/issues/55804
1870
  Command failed (workunit test suites/pjd.sh)
1871
* https://tracker.ceph.com/issues/50223
1872
  client.xxxx isn't responding to mclientcaps(revoke)
1873 75 Venky Shankar
1874
1875
h3. 2022 Aug 22
1876
1877
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1878
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1879
1880
* https://tracker.ceph.com/issues/52624
1881
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1882
* https://tracker.ceph.com/issues/56446
1883
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1884
* https://tracker.ceph.com/issues/55804
1885
    Command failed (workunit test suites/pjd.sh)
1886
* https://tracker.ceph.com/issues/51278
1887
    mds: "FAILED ceph_assert(!segments.empty())"
1888
* https://tracker.ceph.com/issues/54460
1889
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1890
* https://tracker.ceph.com/issues/57205
1891
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1892
* https://tracker.ceph.com/issues/57206
1893
    ceph_test_libcephfs_reclaim crashes during test
1894
* https://tracker.ceph.com/issues/53859
1895
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1896
* https://tracker.ceph.com/issues/50223
1897 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1898
1899
h3. 2022 Aug 12
1900
1901
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1902
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1903
1904
* https://tracker.ceph.com/issues/52624
1905
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1906
* https://tracker.ceph.com/issues/56446
1907
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1908
* https://tracker.ceph.com/issues/51964
1909
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1910
* https://tracker.ceph.com/issues/55804
1911
    Command failed (workunit test suites/pjd.sh)
1912
* https://tracker.ceph.com/issues/50223
1913
    client.xxxx isn't responding to mclientcaps(revoke)
1914
* https://tracker.ceph.com/issues/50821
1915 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1916 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1917 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1918
1919
h3. 2022 Aug 04
1920
1921
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1922
1923 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1924 68 Rishabh Dave
1925
h3. 2022 Jul 25
1926
1927
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1928
1929 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1930
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1931 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1932
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1933
1934
* https://tracker.ceph.com/issues/55804
1935
  Command failed (workunit test suites/pjd.sh)
1936
* https://tracker.ceph.com/issues/50223
1937
  client.xxxx isn't responding to mclientcaps(revoke)
1938
1939
* https://tracker.ceph.com/issues/54460
1940
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1941 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1942 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1943 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1944 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1945
1946
h3. 2022 July 22
1947
1948
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1949
1950
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
1951
Transient SELinux ping failure.
1952
1953
* https://tracker.ceph.com/issues/56694
1954
    qa: avoid blocking forever on hung umount
1955
* https://tracker.ceph.com/issues/56695
1956
    [RHEL stock] pjd test failures
1957
* https://tracker.ceph.com/issues/56696
1958
    admin keyring disappears during qa run
1959
* https://tracker.ceph.com/issues/56697
1960
    qa: fs/snaps fails for fuse
1961
* https://tracker.ceph.com/issues/50222
1962
    osd: 5.2s0 deep-scrub : stat mismatch
1963
* https://tracker.ceph.com/issues/56698
1964
    client: FAILED ceph_assert(_size == 0)
1965
* https://tracker.ceph.com/issues/50223
1966
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1967 66 Rishabh Dave
1968 65 Rishabh Dave
1969
h3. 2022 Jul 15
1970
1971
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1972
1973
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1974
1975
* https://tracker.ceph.com/issues/53859
1976
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1977
* https://tracker.ceph.com/issues/55804
1978
  Command failed (workunit test suites/pjd.sh)
1979
* https://tracker.ceph.com/issues/50223
1980
  client.xxxx isn't responding to mclientcaps(revoke)
1981
* https://tracker.ceph.com/issues/50222
1982
  osd: deep-scrub : stat mismatch
1983
1984
* https://tracker.ceph.com/issues/56632
1985
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1986
* https://tracker.ceph.com/issues/56634
1987
  workunit test fs/snaps/snaptest-intodir.sh
1988
* https://tracker.ceph.com/issues/56644
1989
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1990
1991 61 Rishabh Dave
1992
1993
h3. 2022 July 05
1994 62 Rishabh Dave
1995 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1996
1997
On the 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1998
1999
On the 2nd re-run only a few jobs failed -
2000 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2001
2002
2003
* https://tracker.ceph.com/issues/56446
2004
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2005
* https://tracker.ceph.com/issues/55804
2006
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2007
2008
* https://tracker.ceph.com/issues/56445
2009 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2010
* https://tracker.ceph.com/issues/51267
2011
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2012 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2013
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2014 61 Rishabh Dave
2015 58 Venky Shankar
2016
2017
h3. 2022 July 04
2018
2019
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2020
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2021
2022
* https://tracker.ceph.com/issues/56445
2023 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2024
* https://tracker.ceph.com/issues/56446
2025
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2026
* https://tracker.ceph.com/issues/51964
2027 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2028 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2029 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2030
2031
h3. 2022 June 20
2032
2033
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2034
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2035
2036
* https://tracker.ceph.com/issues/52624
2037
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2038
* https://tracker.ceph.com/issues/55804
2039
    qa failure: pjd link tests failed
2040
* https://tracker.ceph.com/issues/54108
2041
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2042
* https://tracker.ceph.com/issues/55332
2043 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2044
2045
h3. 2022 June 13
2046
2047
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2048
2049
* https://tracker.ceph.com/issues/56024
2050
    cephadm: removes ceph.conf during qa run causing command failure
2051
* https://tracker.ceph.com/issues/48773
2052
    qa: scrub does not complete
2053
* https://tracker.ceph.com/issues/56012
2054
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2055 55 Venky Shankar
2056 54 Venky Shankar
2057
h3. 2022 Jun 13
2058
2059
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2060
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2061
2062
* https://tracker.ceph.com/issues/52624
2063
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2064
* https://tracker.ceph.com/issues/51964
2065
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2066
* https://tracker.ceph.com/issues/53859
2067
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2068
* https://tracker.ceph.com/issues/55804
2069
    qa failure: pjd link tests failed
2070
* https://tracker.ceph.com/issues/56003
2071
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2072
* https://tracker.ceph.com/issues/56011
2073
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2074
* https://tracker.ceph.com/issues/56012
2075 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2076
2077
h3. 2022 Jun 07
2078
2079
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2080
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2081
2082
* https://tracker.ceph.com/issues/52624
2083
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2084
* https://tracker.ceph.com/issues/50223
2085
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2086
* https://tracker.ceph.com/issues/50224
2087 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2088
2089
h3. 2022 May 12
2090 52 Venky Shankar
2091 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2092
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2093
2094
* https://tracker.ceph.com/issues/52624
2095
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2096
* https://tracker.ceph.com/issues/50223
2097
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2098
* https://tracker.ceph.com/issues/55332
2099
    Failure in snaptest-git-ceph.sh
2100
* https://tracker.ceph.com/issues/53859
2101 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2102 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2103
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2104 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2105 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2106
2107 50 Venky Shankar
h3. 2022 May 04
2108
2109
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2110 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2111
2112
* https://tracker.ceph.com/issues/52624
2113
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2114
* https://tracker.ceph.com/issues/50223
2115
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2116
* https://tracker.ceph.com/issues/55332
2117
    Failure in snaptest-git-ceph.sh
2118
* https://tracker.ceph.com/issues/53859
2119
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2120
* https://tracker.ceph.com/issues/55516
2121
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2122
* https://tracker.ceph.com/issues/55537
2123
    mds: crash during fs:upgrade test
2124
* https://tracker.ceph.com/issues/55538
2125 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2126
2127
h3. 2022 Apr 25
2128
2129
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2130
2131
* https://tracker.ceph.com/issues/52624
2132
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2133
* https://tracker.ceph.com/issues/50223
2134
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2135
* https://tracker.ceph.com/issues/55258
2136
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2137
* https://tracker.ceph.com/issues/55377
2138 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2139
2140
h3. 2022 Apr 14
2141
2142
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2143
2144
* https://tracker.ceph.com/issues/52624
2145
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2146
* https://tracker.ceph.com/issues/50223
2147
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2148
* https://tracker.ceph.com/issues/52438
2149
    qa: ffsb timeout
2150
* https://tracker.ceph.com/issues/55170
2151
    mds: crash during rejoin (CDir::fetch_keys)
2152
* https://tracker.ceph.com/issues/55331
2153
    pjd failure
2154
* https://tracker.ceph.com/issues/48773
2155
    qa: scrub does not complete
2156
* https://tracker.ceph.com/issues/55332
2157
    Failure in snaptest-git-ceph.sh
2158
* https://tracker.ceph.com/issues/55258
2159 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2160
2161 46 Venky Shankar
h3. 2022 Apr 11
2162 45 Venky Shankar
2163
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2164
2165
* https://tracker.ceph.com/issues/48773
2166
    qa: scrub does not complete
2167
* https://tracker.ceph.com/issues/52624
2168
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2169
* https://tracker.ceph.com/issues/52438
2170
    qa: ffsb timeout
2171
* https://tracker.ceph.com/issues/48680
2172
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2173
* https://tracker.ceph.com/issues/55236
2174
    qa: fs/snaps tests fails with "hit max job timeout"
2175
* https://tracker.ceph.com/issues/54108
2176
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2177
* https://tracker.ceph.com/issues/54971
2178
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2179
* https://tracker.ceph.com/issues/50223
2180
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2181
* https://tracker.ceph.com/issues/55258
2182 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2183 42 Venky Shankar
2184 43 Venky Shankar
h3. 2022 Mar 21
2185
2186
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2187
2188
Run didn't go well, lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
2189
2190
2191 42 Venky Shankar
h3. 2022 Mar 08
2192
2193
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2194
2195
rerun with
2196
- (drop) https://github.com/ceph/ceph/pull/44679
2197
- (drop) https://github.com/ceph/ceph/pull/44958
2198
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2199
2200
* https://tracker.ceph.com/issues/54419 (new)
2201
    `ceph orch upgrade start` seems to never reach completion
2202
* https://tracker.ceph.com/issues/51964
2203
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2204
* https://tracker.ceph.com/issues/52624
2205
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2206
* https://tracker.ceph.com/issues/50223
2207
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2208
* https://tracker.ceph.com/issues/52438
2209
    qa: ffsb timeout
2210
* https://tracker.ceph.com/issues/50821
2211
    qa: untar_snap_rm failure during mds thrashing
2212 41 Venky Shankar
2213
2214
h3. 2022 Feb 09
2215
2216
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2217
2218
rerun with
2219
- (drop) https://github.com/ceph/ceph/pull/37938
2220
- (drop) https://github.com/ceph/ceph/pull/44335
2221
- (drop) https://github.com/ceph/ceph/pull/44491
2222
- (drop) https://github.com/ceph/ceph/pull/44501
2223
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2224
2225
* https://tracker.ceph.com/issues/51964
2226
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2227
* https://tracker.ceph.com/issues/54066
2228
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2229
* https://tracker.ceph.com/issues/48773
2230
    qa: scrub does not complete
2231
* https://tracker.ceph.com/issues/52624
2232
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2233
* https://tracker.ceph.com/issues/50223
2234
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2235
* https://tracker.ceph.com/issues/52438
2236 40 Patrick Donnelly
    qa: ffsb timeout
2237
2238
h3. 2022 Feb 01
2239
2240
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2241
2242
* https://tracker.ceph.com/issues/54107
2243
    kclient: hang during umount
2244
* https://tracker.ceph.com/issues/54106
2245
    kclient: hang during workunit cleanup
2246
* https://tracker.ceph.com/issues/54108
2247
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2248
* https://tracker.ceph.com/issues/48773
2249
    qa: scrub does not complete
2250
* https://tracker.ceph.com/issues/52438
2251
    qa: ffsb timeout
2252 36 Venky Shankar
2253
2254
h3. 2022 Jan 13
2255 39 Venky Shankar
2256 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2257 38 Venky Shankar
2258
rerun with:
2259 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2260
- (drop) https://github.com/ceph/ceph/pull/43184
2261
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2262
2263
* https://tracker.ceph.com/issues/50223
2264
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2265
* https://tracker.ceph.com/issues/51282
2266
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2267
* https://tracker.ceph.com/issues/48773
2268
    qa: scrub does not complete
2269
* https://tracker.ceph.com/issues/52624
2270
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2271
* https://tracker.ceph.com/issues/53859
2272 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2273
2274
h3. 2022 Jan 03
2275
2276
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2277
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2278
2279
* https://tracker.ceph.com/issues/50223
2280
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2281
* https://tracker.ceph.com/issues/51964
2282
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2283
* https://tracker.ceph.com/issues/51267
2284
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2285
* https://tracker.ceph.com/issues/51282
2286
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2287
* https://tracker.ceph.com/issues/50821
2288
    qa: untar_snap_rm failure during mds thrashing
2289 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2290
    mds: "FAILED ceph_assert(!segments.empty())"
2291
* https://tracker.ceph.com/issues/52279
2292 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2293 33 Patrick Donnelly
2294
2295
h3. 2021 Dec 22
2296
2297
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2298
2299
* https://tracker.ceph.com/issues/52624
2300
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2301
* https://tracker.ceph.com/issues/50223
2302
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2303
* https://tracker.ceph.com/issues/52279
2304
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2305
* https://tracker.ceph.com/issues/50224
2306
    qa: test_mirroring_init_failure_with_recovery failure
2307
* https://tracker.ceph.com/issues/48773
2308
    qa: scrub does not complete
2309 32 Venky Shankar
2310
2311
h3. 2021 Nov 30
2312
2313
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2314
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2315
2316
* https://tracker.ceph.com/issues/53436
2317
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2318
* https://tracker.ceph.com/issues/51964
2319
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2320
* https://tracker.ceph.com/issues/48812
2321
    qa: test_scrub_pause_and_resume_with_abort failure
2322
* https://tracker.ceph.com/issues/51076
2323
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2324
* https://tracker.ceph.com/issues/50223
2325
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2326
* https://tracker.ceph.com/issues/52624
2327
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2328
* https://tracker.ceph.com/issues/50250
2329
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2330 31 Patrick Donnelly
2331
2332
h3. 2021 November 9
2333
2334
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2335
2336
* https://tracker.ceph.com/issues/53214
2337
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2338
* https://tracker.ceph.com/issues/48773
2339
    qa: scrub does not complete
2340
* https://tracker.ceph.com/issues/50223
2341
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2342
* https://tracker.ceph.com/issues/51282
2343
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2344
* https://tracker.ceph.com/issues/52624
2345
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2346
* https://tracker.ceph.com/issues/53216
2347
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2348
* https://tracker.ceph.com/issues/50250
2349
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2350
2351 30 Patrick Donnelly
2352
2353
h3. 2021 November 03
2354
2355
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2356
2357
* https://tracker.ceph.com/issues/51964
2358
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2359
* https://tracker.ceph.com/issues/51282
2360
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2361
* https://tracker.ceph.com/issues/52436
2362
    fs/ceph: "corrupt mdsmap"
2363
* https://tracker.ceph.com/issues/53074
2364
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2365
* https://tracker.ceph.com/issues/53150
2366
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2367
* https://tracker.ceph.com/issues/53155
2368
    MDSMonitor: assertion during upgrade to v16.2.5+
2369 29 Patrick Donnelly
2370
2371
h3. 2021 October 26
2372
2373
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2374
2375
* https://tracker.ceph.com/issues/53074
2376
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2377
* https://tracker.ceph.com/issues/52997
2378
    testing: hanging umount
2379
* https://tracker.ceph.com/issues/50824
2380
    qa: snaptest-git-ceph bus error
2381
* https://tracker.ceph.com/issues/52436
2382
    fs/ceph: "corrupt mdsmap"
2383
* https://tracker.ceph.com/issues/48773
2384
    qa: scrub does not complete
2385
* https://tracker.ceph.com/issues/53082
2386
    ceph-fuse: segmentation fault in Client::handle_mds_map
2387
* https://tracker.ceph.com/issues/50223
2388
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2389
* https://tracker.ceph.com/issues/52624
2390
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2391
* https://tracker.ceph.com/issues/50224
2392
    qa: test_mirroring_init_failure_with_recovery failure
2393
* https://tracker.ceph.com/issues/50821
2394
    qa: untar_snap_rm failure during mds thrashing
2395
* https://tracker.ceph.com/issues/50250
2396
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2397
2398 27 Patrick Donnelly
2399
2400 28 Patrick Donnelly
h3. 2021 October 19
2401 27 Patrick Donnelly
2402
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2403
2404
* https://tracker.ceph.com/issues/52995
2405
    qa: test_standby_count_wanted failure
2406
* https://tracker.ceph.com/issues/52948
2407
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2408
* https://tracker.ceph.com/issues/52996
2409
    qa: test_perf_counters via test_openfiletable
2410
* https://tracker.ceph.com/issues/48772
2411
    qa: pjd: not ok 9, 44, 80
2412
* https://tracker.ceph.com/issues/52997
2413
    testing: hanging umount
2414
* https://tracker.ceph.com/issues/50250
2415
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2416
* https://tracker.ceph.com/issues/52624
2417
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2418
* https://tracker.ceph.com/issues/50223
2419
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2420
* https://tracker.ceph.com/issues/50821
2421
    qa: untar_snap_rm failure during mds thrashing
2422
* https://tracker.ceph.com/issues/48773
2423
    qa: scrub does not complete
2424 26 Patrick Donnelly
2425
2426
h3. 2021 October 12
2427
2428
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2429
2430
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2431
2432
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2433
2434
2435
* https://tracker.ceph.com/issues/51282
2436
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2437
* https://tracker.ceph.com/issues/52948
2438
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2439
* https://tracker.ceph.com/issues/48773
2440
    qa: scrub does not complete
2441
* https://tracker.ceph.com/issues/50224
2442
    qa: test_mirroring_init_failure_with_recovery failure
2443
* https://tracker.ceph.com/issues/52949
2444
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2445 25 Patrick Donnelly
2446 23 Patrick Donnelly
2447 24 Patrick Donnelly
h3. 2021 October 02
2448
2449
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2450
2451
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2452
2453
test_simple failures caused by PR in this set.
2454
2455
A few reruns because of QA infra noise.
2456
2457
* https://tracker.ceph.com/issues/52822
2458
    qa: failed pacific install on fs:upgrade
2459
* https://tracker.ceph.com/issues/52624
2460
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2461
* https://tracker.ceph.com/issues/50223
2462
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2463
* https://tracker.ceph.com/issues/48773
2464
    qa: scrub does not complete
2465
2466
2467 23 Patrick Donnelly
h3. 2021 September 20
2468
2469
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2470
2471
* https://tracker.ceph.com/issues/52677
2472
    qa: test_simple failure
2473
* https://tracker.ceph.com/issues/51279
2474
    kclient hangs on umount (testing branch)
2475
* https://tracker.ceph.com/issues/50223
2476
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2477
* https://tracker.ceph.com/issues/50250
2478
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2479
* https://tracker.ceph.com/issues/52624
2480
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2481
* https://tracker.ceph.com/issues/52438
2482
    qa: ffsb timeout
2483 22 Patrick Donnelly
2484
2485
h3. 2021 September 10
2486
2487
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2488
2489
* https://tracker.ceph.com/issues/50223
2490
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2491
* https://tracker.ceph.com/issues/50250
2492
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2493
* https://tracker.ceph.com/issues/52624
2494
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2495
* https://tracker.ceph.com/issues/52625
2496
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2497
* https://tracker.ceph.com/issues/52439
2498
    qa: acls does not compile on centos stream
2499
* https://tracker.ceph.com/issues/50821
2500
    qa: untar_snap_rm failure during mds thrashing
2501
* https://tracker.ceph.com/issues/48773
2502
    qa: scrub does not complete
2503
* https://tracker.ceph.com/issues/52626
2504
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2505
* https://tracker.ceph.com/issues/51279
2506
    kclient hangs on umount (testing branch)
2507 21 Patrick Donnelly
2508
2509
h3. 2021 August 27
2510
2511
Several jobs died because of device failures.
2512
2513
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2514
2515
* https://tracker.ceph.com/issues/52430
2516
    mds: fast async create client mount breaks racy test
2517
* https://tracker.ceph.com/issues/52436
2518
    fs/ceph: "corrupt mdsmap"
2519
* https://tracker.ceph.com/issues/52437
2520
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2521
* https://tracker.ceph.com/issues/51282
2522
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2523
* https://tracker.ceph.com/issues/52438
2524
    qa: ffsb timeout
2525
* https://tracker.ceph.com/issues/52439
2526
    qa: acls does not compile on centos stream
2527 20 Patrick Donnelly
2528
2529
h3. 2021 July 30
2530
2531
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2532
2533
* https://tracker.ceph.com/issues/50250
2534
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2535
* https://tracker.ceph.com/issues/51282
2536
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2537
* https://tracker.ceph.com/issues/48773
2538
    qa: scrub does not complete
2539
* https://tracker.ceph.com/issues/51975
2540
    pybind/mgr/stats: KeyError
2541 19 Patrick Donnelly
2542
2543
h3. 2021 July 28
2544
2545
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2546
2547
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2548
2549
* https://tracker.ceph.com/issues/51905
2550
    qa: "error reading sessionmap 'mds1_sessionmap'"
2551
* https://tracker.ceph.com/issues/48773
2552
    qa: scrub does not complete
2553
* https://tracker.ceph.com/issues/50250
2554
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2555
* https://tracker.ceph.com/issues/51267
2556
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2557
* https://tracker.ceph.com/issues/51279
2558
    kclient hangs on umount (testing branch)
2559 18 Patrick Donnelly
2560
2561
h3. 2021 July 16
2562
2563
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2564
2565
* https://tracker.ceph.com/issues/48773
2566
    qa: scrub does not complete
2567
* https://tracker.ceph.com/issues/48772
2568
    qa: pjd: not ok 9, 44, 80
2569
* https://tracker.ceph.com/issues/45434
2570
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2571
* https://tracker.ceph.com/issues/51279
2572
    kclient hangs on umount (testing branch)
2573
* https://tracker.ceph.com/issues/50824
2574
    qa: snaptest-git-ceph bus error
2575 17 Patrick Donnelly
2576
2577
h3. 2021 July 04
2578
2579
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2580
2581
* https://tracker.ceph.com/issues/48773
2582
    qa: scrub does not complete
2583
* https://tracker.ceph.com/issues/39150
2584
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2585
* https://tracker.ceph.com/issues/45434
2586
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2587
* https://tracker.ceph.com/issues/51282
2588
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2589
* https://tracker.ceph.com/issues/48771
2590
    qa: iogen: workload fails to cause balancing
2591
* https://tracker.ceph.com/issues/51279
2592
    kclient hangs on umount (testing branch)
2593
* https://tracker.ceph.com/issues/50250
2594
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2595 16 Patrick Donnelly
2596
2597
h3. 2021 July 01
2598
2599
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2600
2601
* https://tracker.ceph.com/issues/51197
2602
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2603
* https://tracker.ceph.com/issues/50866
2604
    osd: stat mismatch on objects
2605
* https://tracker.ceph.com/issues/48773
2606
    qa: scrub does not complete
2607 15 Patrick Donnelly
2608
2609
h3. 2021 June 26
2610
2611
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2612
2613
* https://tracker.ceph.com/issues/51183
2614
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2615
* https://tracker.ceph.com/issues/51410
2616
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2617
* https://tracker.ceph.com/issues/48773
2618
    qa: scrub does not complete
2619
* https://tracker.ceph.com/issues/51282
2620
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2621
* https://tracker.ceph.com/issues/51169
2622
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2623
* https://tracker.ceph.com/issues/48772
2624
    qa: pjd: not ok 9, 44, 80
2625 14 Patrick Donnelly
2626
2627
h3. 2021 June 21
2628
2629
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2630
2631
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2632
2633
* https://tracker.ceph.com/issues/51282
2634
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2635
* https://tracker.ceph.com/issues/51183
2636
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2637
* https://tracker.ceph.com/issues/48773
2638
    qa: scrub does not complete
2639
* https://tracker.ceph.com/issues/48771
2640
    qa: iogen: workload fails to cause balancing
2641
* https://tracker.ceph.com/issues/51169
2642
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2643
* https://tracker.ceph.com/issues/50495
2644
    libcephfs: shutdown race fails with status 141
2645
* https://tracker.ceph.com/issues/45434
2646
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2647
* https://tracker.ceph.com/issues/50824
2648
    qa: snaptest-git-ceph bus error
2649
* https://tracker.ceph.com/issues/50223
2650
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2651 13 Patrick Donnelly
2652
2653
h3. 2021 June 16
2654
2655
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2656
2657
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2658
2659
* https://tracker.ceph.com/issues/45434
2660
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2661
* https://tracker.ceph.com/issues/51169
2662
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2663
* https://tracker.ceph.com/issues/43216
2664
    MDSMonitor: removes MDS coming out of quorum election
2665
* https://tracker.ceph.com/issues/51278
2666
    mds: "FAILED ceph_assert(!segments.empty())"
2667
* https://tracker.ceph.com/issues/51279
2668
    kclient hangs on umount (testing branch)
2669
* https://tracker.ceph.com/issues/51280
2670
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2671
* https://tracker.ceph.com/issues/51183
2672
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2673
* https://tracker.ceph.com/issues/51281
2674
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2675
* https://tracker.ceph.com/issues/48773
2676
    qa: scrub does not complete
2677
* https://tracker.ceph.com/issues/51076
2678
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2679
* https://tracker.ceph.com/issues/51228
2680
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2681
* https://tracker.ceph.com/issues/51282
2682
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2683 12 Patrick Donnelly
2684
2685
h3. 2021 June 14
2686
2687
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2688
2689
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2690
2691
* https://tracker.ceph.com/issues/51169
2692
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2693
* https://tracker.ceph.com/issues/51228
2694
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2695
* https://tracker.ceph.com/issues/48773
2696
    qa: scrub does not complete
2697
* https://tracker.ceph.com/issues/51183
2698
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2699
* https://tracker.ceph.com/issues/45434
2700
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2701
* https://tracker.ceph.com/issues/51182
2702
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2703
* https://tracker.ceph.com/issues/51229
2704
    qa: test_multi_snap_schedule list difference failure
2705
* https://tracker.ceph.com/issues/50821
2706
    qa: untar_snap_rm failure during mds thrashing
2707 11 Patrick Donnelly
2708
2709
h3. 2021 June 13
2710
2711
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2712
2713
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2714
2715
* https://tracker.ceph.com/issues/51169
2716
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2717
* https://tracker.ceph.com/issues/48773
2718
    qa: scrub does not complete
2719
* https://tracker.ceph.com/issues/51182
2720
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2721
* https://tracker.ceph.com/issues/51183
2722
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2723
* https://tracker.ceph.com/issues/51197
2724
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2725
* https://tracker.ceph.com/issues/45434
2726 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2727
2728
h3. 2021 June 11
2729
2730
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2731
2732
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2733
2734
* https://tracker.ceph.com/issues/51169
2735
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2736
* https://tracker.ceph.com/issues/45434
2737
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2738
* https://tracker.ceph.com/issues/48771
2739
    qa: iogen: workload fails to cause balancing
2740
* https://tracker.ceph.com/issues/43216
2741
    MDSMonitor: removes MDS coming out of quorum election
2742
* https://tracker.ceph.com/issues/51182
2743
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2744
* https://tracker.ceph.com/issues/50223
2745
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2746
* https://tracker.ceph.com/issues/48773
2747
    qa: scrub does not complete
2748
* https://tracker.ceph.com/issues/51183
2749
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2750
* https://tracker.ceph.com/issues/51184
2751
    qa: fs:bugs does not specify distro
2752 9 Patrick Donnelly
2753
2754
h3. 2021 June 03
2755
2756
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2757
2758
* https://tracker.ceph.com/issues/45434
2759
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2760
* https://tracker.ceph.com/issues/50016
2761
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2762
* https://tracker.ceph.com/issues/50821
2763
    qa: untar_snap_rm failure during mds thrashing
2764
* https://tracker.ceph.com/issues/50622 (regression)
2765
    msg: active_connections regression
2766
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2767
    qa: failed umount in test_volumes
2768
* https://tracker.ceph.com/issues/48773
2769
    qa: scrub does not complete
2770
* https://tracker.ceph.com/issues/43216
2771
    MDSMonitor: removes MDS coming out of quorum election
2772 7 Patrick Donnelly
2773
2774 8 Patrick Donnelly
h3. 2021 May 18
2775
2776
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2777
2778
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2779
looked better. Some odd new noise in the rerun relating to packaging and "No
2780
module named 'tasks.ceph'".
2781
2782
* https://tracker.ceph.com/issues/50824
2783
    qa: snaptest-git-ceph bus error
2784
* https://tracker.ceph.com/issues/50622 (regression)
2785
    msg: active_connections regression
2786
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2787
    qa: failed umount in test_volumes
2788
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2789
    qa: quota failure
2790
2791
2792 7 Patrick Donnelly
h3. 2021 May 18
2793
2794
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642
2795
2796
* https://tracker.ceph.com/issues/50821
2797
    qa: untar_snap_rm failure during mds thrashing
2798
* https://tracker.ceph.com/issues/48773
2799
    qa: scrub does not complete
2800
* https://tracker.ceph.com/issues/45591
2801
    mgr: FAILED ceph_assert(daemon != nullptr)
2802
* https://tracker.ceph.com/issues/50866
2803
    osd: stat mismatch on objects
2804
* https://tracker.ceph.com/issues/50016
2805
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2806
* https://tracker.ceph.com/issues/50867
2807
    qa: fs:mirror: reduced data availability
2810
* https://tracker.ceph.com/issues/50622 (regression)
2811
    msg: active_connections regression
2812
* https://tracker.ceph.com/issues/50223
2813
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2814
* https://tracker.ceph.com/issues/50868
2815
    qa: "kern.log.gz already exists; not overwritten"
2816
* https://tracker.ceph.com/issues/50870
2817
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
2818 6 Patrick Donnelly
2819
2820
h3. 2021 May 11
2821
2822
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
2823
2824
* one class of failures caused by PR
2825
* https://tracker.ceph.com/issues/48812
2826
    qa: test_scrub_pause_and_resume_with_abort failure
2827
* https://tracker.ceph.com/issues/50390
2828
    mds: monclient: wait_auth_rotating timed out after 30
2829
* https://tracker.ceph.com/issues/48773
2830
    qa: scrub does not complete
2831
* https://tracker.ceph.com/issues/50821
2832
    qa: untar_snap_rm failure during mds thrashing
2833
* https://tracker.ceph.com/issues/50224
2834
    qa: test_mirroring_init_failure_with_recovery failure
2835
* https://tracker.ceph.com/issues/50622 (regression)
2836
    msg: active_connections regression
2837
* https://tracker.ceph.com/issues/50825
2838
    qa: snaptest-git-ceph hang during mon thrashing v2
2841
* https://tracker.ceph.com/issues/50823
2842
    qa: RuntimeError: timeout waiting for cluster to stabilize
2843 5 Patrick Donnelly
2844
2845
h3. 2021 May 14
2846
2847
https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/
2848
2849
* https://tracker.ceph.com/issues/48812
2850
    qa: test_scrub_pause_and_resume_with_abort failure
2851
* https://tracker.ceph.com/issues/50821
2852
    qa: untar_snap_rm failure during mds thrashing
2853
* https://tracker.ceph.com/issues/50622 (regression)
2854
    msg: active_connections regression
2855
* https://tracker.ceph.com/issues/50822
2856
    qa: testing kernel patch for client metrics causes mds abort
2857
* https://tracker.ceph.com/issues/48773
2858
    qa: scrub does not complete
2859
* https://tracker.ceph.com/issues/50823
2860
    qa: RuntimeError: timeout waiting for cluster to stabilize
2861
* https://tracker.ceph.com/issues/50824
2862
    qa: snaptest-git-ceph bus error
2863
* https://tracker.ceph.com/issues/50825
2864
    qa: snaptest-git-ceph hang during mon thrashing v2
2865
* https://tracker.ceph.com/issues/50826
2866
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
2867 4 Patrick Donnelly
2868
2869
h3. 2021 May 01
2870
2871
https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/
2872
2873
* https://tracker.ceph.com/issues/45434
2874
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2875
* https://tracker.ceph.com/issues/50281
2876
    qa: untar_snap_rm timeout
2877
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2878
    qa: quota failure
2879
* https://tracker.ceph.com/issues/48773
2880
    qa: scrub does not complete
2881
* https://tracker.ceph.com/issues/50390
2882
    mds: monclient: wait_auth_rotating timed out after 30
2883
* https://tracker.ceph.com/issues/50250
2884
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2885
* https://tracker.ceph.com/issues/50622 (regression)
2886
    msg: active_connections regression
2887
* https://tracker.ceph.com/issues/45591
2888
    mgr: FAILED ceph_assert(daemon != nullptr)
2889
* https://tracker.ceph.com/issues/50221
2890
    qa: snaptest-git-ceph failure in git diff
2891
* https://tracker.ceph.com/issues/50016
2892
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2893 3 Patrick Donnelly
2894
2895
h3. 2021 Apr 15
2896
2897
https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/
2898
2899
* https://tracker.ceph.com/issues/50281
2900
    qa: untar_snap_rm timeout
2901
* https://tracker.ceph.com/issues/50220
2902
    qa: dbench workload timeout
2903
* https://tracker.ceph.com/issues/50246
2904
    mds: failure replaying journal (EMetaBlob)
2905
* https://tracker.ceph.com/issues/50250
2906
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2907
* https://tracker.ceph.com/issues/50016
2908
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2909
* https://tracker.ceph.com/issues/50222
2910
    osd: 5.2s0 deep-scrub : stat mismatch
2911
* https://tracker.ceph.com/issues/45434
2912
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2913
* https://tracker.ceph.com/issues/49845
2914
    qa: failed umount in test_volumes
2915
* https://tracker.ceph.com/issues/37808
2916
    osd: osdmap cache weak_refs assert during shutdown
2917
* https://tracker.ceph.com/issues/50387
2918
    client: fs/snaps failure
2919
* https://tracker.ceph.com/issues/50389
2920
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
2921
* https://tracker.ceph.com/issues/50216
2922
    qa: "ls: cannot access 'lost+found': No such file or directory"
2923
* https://tracker.ceph.com/issues/50390
2924
    mds: monclient: wait_auth_rotating timed out after 30
2925
2926 1 Patrick Donnelly
2927
2928 2 Patrick Donnelly
h3. 2021 Apr 08
2929
2930
https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/
2931
2932
* https://tracker.ceph.com/issues/45434
2933
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2934
* https://tracker.ceph.com/issues/50016
2935
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2936
* https://tracker.ceph.com/issues/48773
2937
    qa: scrub does not complete
2938
* https://tracker.ceph.com/issues/50279
2939
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
2940
* https://tracker.ceph.com/issues/50246
2941
    mds: failure replaying journal (EMetaBlob)
2942
* https://tracker.ceph.com/issues/48365
2943
    qa: ffsb build failure on CentOS 8.2
2944
* https://tracker.ceph.com/issues/50216
2945
    qa: "ls: cannot access 'lost+found': No such file or directory"
2946
* https://tracker.ceph.com/issues/50223
2947
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2948
* https://tracker.ceph.com/issues/50280
2949
    cephadm: RuntimeError: uid/gid not found
2950
* https://tracker.ceph.com/issues/50281
2951
    qa: untar_snap_rm timeout
2952
2953 1 Patrick Donnelly
h3. 2021 Apr 08
2954
2955
https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
2956
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)
2957
2958
* https://tracker.ceph.com/issues/50246
2959
    mds: failure replaying journal (EMetaBlob)
2960
* https://tracker.ceph.com/issues/50250
2961
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
2962
2963
2964
h3. 2021 Apr 07
2965
2966
https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/
2967
2968
* https://tracker.ceph.com/issues/50215
2969
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
2970
* https://tracker.ceph.com/issues/49466
2971
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
2972
* https://tracker.ceph.com/issues/50216
2973
    qa: "ls: cannot access 'lost+found': No such file or directory"
2974
* https://tracker.ceph.com/issues/48773
2975
    qa: scrub does not complete
2976
* https://tracker.ceph.com/issues/49845
2977
    qa: failed umount in test_volumes
2978
* https://tracker.ceph.com/issues/50220
2979
    qa: dbench workload timeout
2980
* https://tracker.ceph.com/issues/50221
2981
    qa: snaptest-git-ceph failure in git diff
2982
* https://tracker.ceph.com/issues/50222
2983
    osd: 5.2s0 deep-scrub : stat mismatch
2984
* https://tracker.ceph.com/issues/50223
2985
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2986
* https://tracker.ceph.com/issues/50224
2987
    qa: test_mirroring_init_failure_with_recovery failure
2988
2989
h3. 2021 Apr 01
2990
2991
https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/
2992
2993
* https://tracker.ceph.com/issues/48772
2994
    qa: pjd: not ok 9, 44, 80
2995
* https://tracker.ceph.com/issues/50177
2996
    osd: "stalled aio... buggy kernel or bad device?"
2997
* https://tracker.ceph.com/issues/48771
2998
    qa: iogen: workload fails to cause balancing
2999
* https://tracker.ceph.com/issues/49845
3000
    qa: failed umount in test_volumes
3001
* https://tracker.ceph.com/issues/48773
3002
    qa: scrub does not complete
3003
* https://tracker.ceph.com/issues/48805
3004
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3005
* https://tracker.ceph.com/issues/50178
3006
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
3007
* https://tracker.ceph.com/issues/45434
3008
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3009
3010
h3. 2021 Mar 24
3011
3012
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
3013
3014
* https://tracker.ceph.com/issues/49500
3015
    qa: "Assertion `cb_done' failed."
3016
* https://tracker.ceph.com/issues/50019
3017
    qa: mount failure with cephadm "probably no MDS server is up?"
3018
* https://tracker.ceph.com/issues/50020
3019
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
3020
* https://tracker.ceph.com/issues/48773
3021
    qa: scrub does not complete
3022
* https://tracker.ceph.com/issues/45434
3023
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3024
* https://tracker.ceph.com/issues/48805
3025
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3026
* https://tracker.ceph.com/issues/48772
3027
    qa: pjd: not ok 9, 44, 80
3028
* https://tracker.ceph.com/issues/50021
3029
    qa: snaptest-git-ceph failure during mon thrashing
3030
* https://tracker.ceph.com/issues/48771
3031
    qa: iogen: workload fails to cause balancing
3032
* https://tracker.ceph.com/issues/50016
3033
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
3034
* https://tracker.ceph.com/issues/49466
3035
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3036
3037
3038
h3. 2021 Mar 18
3039
3040
https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/
3041
3042
* https://tracker.ceph.com/issues/49466
3043
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3044
* https://tracker.ceph.com/issues/48773
3045
    qa: scrub does not complete
3046
* https://tracker.ceph.com/issues/48805
3047
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3048
* https://tracker.ceph.com/issues/45434
3049
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3050
* https://tracker.ceph.com/issues/49845
3051
    qa: failed umount in test_volumes
3052
* https://tracker.ceph.com/issues/49605
3053
    mgr: drops command on the floor
3054
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
3055
    qa: quota failure
3056
* https://tracker.ceph.com/issues/49928
3057
    client: items pinned in cache preventing unmount x2
3058
3059
h3. 2021 Mar 15
3060
3061
https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/
3062
3063
* https://tracker.ceph.com/issues/49842
3064
    qa: stuck pkg install
3065
* https://tracker.ceph.com/issues/49466
3066
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3067
* https://tracker.ceph.com/issues/49822
3068
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
3069
* https://tracker.ceph.com/issues/49240
3070
    terminate called after throwing an instance of 'std::bad_alloc'
3071
* https://tracker.ceph.com/issues/48773
3072
    qa: scrub does not complete
3073
* https://tracker.ceph.com/issues/45434
3074
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3075
* https://tracker.ceph.com/issues/49500
3076
    qa: "Assertion `cb_done' failed."
3077
* https://tracker.ceph.com/issues/49843
3078
    qa: fs/snaps/snaptest-upchildrealms.sh failure
3079
* https://tracker.ceph.com/issues/49845
3080
    qa: failed umount in test_volumes
3081
* https://tracker.ceph.com/issues/48805
3082
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3083
* https://tracker.ceph.com/issues/49605
3084
    mgr: drops command on the floor
3085
3086
and a failure caused by PR: https://github.com/ceph/ceph/pull/39969
3087
3088
3089
h3. 2021 Mar 09
3090
3091
https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/
3092
3093
* https://tracker.ceph.com/issues/49500
3094
    qa: "Assertion `cb_done' failed."
3095
* https://tracker.ceph.com/issues/48805
3096
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
3097
* https://tracker.ceph.com/issues/48773
3098
    qa: scrub does not complete
3099
* https://tracker.ceph.com/issues/45434
3100
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
3101
* https://tracker.ceph.com/issues/49240
3102
    terminate called after throwing an instance of 'std::bad_alloc'
3103
* https://tracker.ceph.com/issues/49466
3104
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
3105
* https://tracker.ceph.com/issues/49684
3106
    qa: fs:cephadm mount does not wait for mds to be created
3107
* https://tracker.ceph.com/issues/48771
3108
    qa: iogen: workload fails to cause balancing