Main » History » Version 219

Venky Shankar, 02/19/2024 07:54 AM

1 218 Venky Shankar
h1. MAIN
2 1 Patrick Donnelly
3 218 Venky Shankar
h3. ADD NEW ENTRY BELOW.
4
5
h3. 19th Feb 2024
6
7
* https://tracker.ceph.com/issues/61243
8
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
9
* https://tracker.ceph.com/issues/63700
10
    qa: test_cd_with_args failure
11
* https://tracker.ceph.com/issues/63141
12
    qa/cephfs: test_idem_unaffected_root_squash fails
13
* https://tracker.ceph.com/issues/59684
14
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
15
* https://tracker.ceph.com/issues/63949
16
    leak in mds.c detected by valgrind during CephFS QA run
17
* https://tracker.ceph.com/issues/63764
18
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
19
* https://tracker.ceph.com/issues/63699
20
    qa: failed cephfs-shell test_reading_conf
21 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
22
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
23 201 Rishabh Dave
24 217 Venky Shankar
h3. 29 Jan 2024
25
26
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
27
28
* https://tracker.ceph.com/issues/57676
29
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
30
* https://tracker.ceph.com/issues/63949
31
    leak in mds.c detected by valgrind during CephFS QA run
32
* https://tracker.ceph.com/issues/62067
33
    ffsb.sh failure "Resource temporarily unavailable"
34
* https://tracker.ceph.com/issues/64172
35
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
36
* https://tracker.ceph.com/issues/63265
37
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
38
* https://tracker.ceph.com/issues/61243
39
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
40
* https://tracker.ceph.com/issues/59684
41
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
42
* https://tracker.ceph.com/issues/57656
43
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
44
* https://tracker.ceph.com/issues/64209
45
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
46
47 216 Venky Shankar
h3. 17th Jan 2024
48
49
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
50
51
* https://tracker.ceph.com/issues/63764
52
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
53
* https://tracker.ceph.com/issues/57676
54
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
55
* https://tracker.ceph.com/issues/51964
56
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
57
* https://tracker.ceph.com/issues/63949
58
    leak in mds.c detected by valgrind during CephFS QA run
59
* https://tracker.ceph.com/issues/62067
60
    ffsb.sh failure "Resource temporarily unavailable"
61
* https://tracker.ceph.com/issues/61243
62
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
63
* https://tracker.ceph.com/issues/63259
64
    mds: failed to store backtrace and force file system read-only
65
* https://tracker.ceph.com/issues/63265
66
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
67
68
h3. 16 Jan 2024
69 215 Rishabh Dave
70 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
71
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
72
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
73
74
* https://tracker.ceph.com/issues/63764
75
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
76
* https://tracker.ceph.com/issues/63141
77
  qa/cephfs: test_idem_unaffected_root_squash fails
78
* https://tracker.ceph.com/issues/62067
79
  ffsb.sh failure "Resource temporarily unavailable" 
80
* https://tracker.ceph.com/issues/51964
81
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
82
* https://tracker.ceph.com/issues/54462 
83
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
84
* https://tracker.ceph.com/issues/57676
85
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
86
87
* https://tracker.ceph.com/issues/63949
88
  valgrind leak in MDS
89
* https://tracker.ceph.com/issues/64041
90
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
91
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
92
* from the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS
93
94 213 Venky Shankar
h3. 06 Dec 2023
95
96
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
97
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
98
99
* https://tracker.ceph.com/issues/63764
100
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
101
* https://tracker.ceph.com/issues/63233
102
    mon|client|mds: valgrind reports possible leaks in the MDS
103
* https://tracker.ceph.com/issues/57676
104
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
105
* https://tracker.ceph.com/issues/62580
106
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
107
* https://tracker.ceph.com/issues/62067
108
    ffsb.sh failure "Resource temporarily unavailable"
109
* https://tracker.ceph.com/issues/61243
110
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
111
* https://tracker.ceph.com/issues/62081
112
    tasks/fscrypt-common does not finish, times out
113
* https://tracker.ceph.com/issues/63265
114
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
115
* https://tracker.ceph.com/issues/63806
116
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
117
118 211 Patrick Donnelly
h3. 30 Nov 2023
119
120
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
121
122
* https://tracker.ceph.com/issues/63699
123 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
124
* https://tracker.ceph.com/issues/63700
125
    qa: test_cd_with_args failure
126 211 Patrick Donnelly
127 210 Venky Shankar
h3. 29 Nov 2023
128
129
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
130
131
* https://tracker.ceph.com/issues/63233
132
    mon|client|mds: valgrind reports possible leaks in the MDS
133
* https://tracker.ceph.com/issues/63141
134
    qa/cephfs: test_idem_unaffected_root_squash fails
135
* https://tracker.ceph.com/issues/57676
136
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
137
* https://tracker.ceph.com/issues/57655
138
    qa: fs:mixed-clients kernel_untar_build failure
139
* https://tracker.ceph.com/issues/62067
140
    ffsb.sh failure "Resource temporarily unavailable"
141
* https://tracker.ceph.com/issues/61243
142
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
143
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
144
* https://tracker.ceph.com/issues/62810
145
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- needs to be fixed again
146
147 206 Venky Shankar
h3. 14 Nov 2023
148 207 Milind Changire
(Milind)
149
150
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
151
152
* https://tracker.ceph.com/issues/53859
153
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
154
* https://tracker.ceph.com/issues/63233
155
  mon|client|mds: valgrind reports possible leaks in the MDS
156
* https://tracker.ceph.com/issues/63521
157
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
158
* https://tracker.ceph.com/issues/57655
159
  qa: fs:mixed-clients kernel_untar_build failure
160
* https://tracker.ceph.com/issues/62580
161
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
162
* https://tracker.ceph.com/issues/57676
163
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
164
* https://tracker.ceph.com/issues/61243
165
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
166
* https://tracker.ceph.com/issues/63141
167
    qa/cephfs: test_idem_unaffected_root_squash fails
168
* https://tracker.ceph.com/issues/51964
169
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
170
* https://tracker.ceph.com/issues/63522
171
    No module named 'tasks.ceph_fuse'
172
    No module named 'tasks.kclient'
173
    No module named 'tasks.cephfs.fuse_mount'
174
    No module named 'tasks.ceph'
175
* https://tracker.ceph.com/issues/63523
176
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
177
178
179
h3. 14 Nov 2023
180 206 Venky Shankar
181
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
182
183
(ignore the fs:upgrade test failure - the PR is excluded from merge)
184
185
* https://tracker.ceph.com/issues/57676
186
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
187
* https://tracker.ceph.com/issues/63233
188
    mon|client|mds: valgrind reports possible leaks in the MDS
189
* https://tracker.ceph.com/issues/63141
190
    qa/cephfs: test_idem_unaffected_root_squash fails
191
* https://tracker.ceph.com/issues/62580
192
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
193
* https://tracker.ceph.com/issues/57655
194
    qa: fs:mixed-clients kernel_untar_build failure
195
* https://tracker.ceph.com/issues/51964
196
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
197
* https://tracker.ceph.com/issues/63519
198
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
199
* https://tracker.ceph.com/issues/57087
200
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
201
* https://tracker.ceph.com/issues/58945
202
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
203
204 204 Rishabh Dave
h3. 7 Nov 2023
205
206 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
207
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
208
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
209 204 Rishabh Dave
210
* https://tracker.ceph.com/issues/53859
211
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
212
* https://tracker.ceph.com/issues/63233
213
  mon|client|mds: valgrind reports possible leaks in the MDS
214
* https://tracker.ceph.com/issues/57655
215
  qa: fs:mixed-clients kernel_untar_build failure
216
* https://tracker.ceph.com/issues/57676
217
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
218
219
* https://tracker.ceph.com/issues/63473
220
  fsstress.sh failed with errno 124
221
222 202 Rishabh Dave
h3. 3 Nov 2023
223 203 Rishabh Dave
224 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
225
226
* https://tracker.ceph.com/issues/63141
227
  qa/cephfs: test_idem_unaffected_root_squash fails
228
* https://tracker.ceph.com/issues/63233
229
  mon|client|mds: valgrind reports possible leaks in the MDS
230
* https://tracker.ceph.com/issues/57656
231
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
232
* https://tracker.ceph.com/issues/57655
233
  qa: fs:mixed-clients kernel_untar_build failure
234
* https://tracker.ceph.com/issues/57676
235
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
236
237
* https://tracker.ceph.com/issues/59531
238
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
239
* https://tracker.ceph.com/issues/52624
240
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
241
242 198 Patrick Donnelly
h3. 24 October 2023
243
244
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
245
246 200 Patrick Donnelly
Two failures:
247
248
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
249
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
250
251
Probably related to https://github.com/ceph/ceph/pull/53255; killing the mount as part of the test did not complete. Will research more.
252
253 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
254
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
255
* https://tracker.ceph.com/issues/57676
256 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
257
* https://tracker.ceph.com/issues/63233
258
    mon|client|mds: valgrind reports possible leaks in the MDS
259
* https://tracker.ceph.com/issues/59531
260
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
261
* https://tracker.ceph.com/issues/57655
262
    qa: fs:mixed-clients kernel_untar_build failure
263 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
264
    ffsb.sh failure "Resource temporarily unavailable"
265
* https://tracker.ceph.com/issues/63411
266
    qa: flush journal may cause timeouts of `scrub status`
267
* https://tracker.ceph.com/issues/61243
268
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
269
* https://tracker.ceph.com/issues/63141
270 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
271 148 Rishabh Dave
272 195 Venky Shankar
h3. 18 Oct 2023
273
274
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
275
276
* https://tracker.ceph.com/issues/52624
277
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
278
* https://tracker.ceph.com/issues/57676
279
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
280
* https://tracker.ceph.com/issues/63233
281
    mon|client|mds: valgrind reports possible leaks in the MDS
282
* https://tracker.ceph.com/issues/63141
283
    qa/cephfs: test_idem_unaffected_root_squash fails
284
* https://tracker.ceph.com/issues/59531
285
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
286
* https://tracker.ceph.com/issues/62658
287
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
288
* https://tracker.ceph.com/issues/62580
289
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
290
* https://tracker.ceph.com/issues/62067
291
    ffsb.sh failure "Resource temporarily unavailable"
292
* https://tracker.ceph.com/issues/57655
293
    qa: fs:mixed-clients kernel_untar_build failure
294
* https://tracker.ceph.com/issues/62036
295
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
296
* https://tracker.ceph.com/issues/58945
297
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
298
* https://tracker.ceph.com/issues/62847
299
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
300
301 193 Venky Shankar
h3. 13 Oct 2023
302
303
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
304
305
* https://tracker.ceph.com/issues/52624
306
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
307
* https://tracker.ceph.com/issues/62936
308
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
309
* https://tracker.ceph.com/issues/47292
310
    cephfs-shell: test_df_for_valid_file failure
311
* https://tracker.ceph.com/issues/63141
312
    qa/cephfs: test_idem_unaffected_root_squash fails
313
* https://tracker.ceph.com/issues/62081
314
    tasks/fscrypt-common does not finish, timesout
315 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
316
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
317 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
318
    mon|client|mds: valgrind reports possible leaks in the MDS
319 193 Venky Shankar
320 190 Patrick Donnelly
h3. 16 Oct 2023
321
322
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
323
324 192 Patrick Donnelly
Infrastructure issues:
325
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
326
    Host lost.
327
328 196 Patrick Donnelly
One follow-up fix:
329
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
330
331 192 Patrick Donnelly
Failures:
332
333
* https://tracker.ceph.com/issues/56694
334
    qa: avoid blocking forever on hung umount
335
* https://tracker.ceph.com/issues/63089
336
    qa: tasks/mirror times out
337
* https://tracker.ceph.com/issues/52624
338
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
339
* https://tracker.ceph.com/issues/59531
340
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
341
* https://tracker.ceph.com/issues/57676
342
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
343
* https://tracker.ceph.com/issues/62658 
344
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
345
* https://tracker.ceph.com/issues/61243
346
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
347
* https://tracker.ceph.com/issues/57656
348
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
349
* https://tracker.ceph.com/issues/63233
350
  mon|client|mds: valgrind reports possible leaks in the MDS
351 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
352
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
353 192 Patrick Donnelly
354 189 Rishabh Dave
h3. 9 Oct 2023
355
356
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
357
358
* https://tracker.ceph.com/issues/54460
359
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
360
* https://tracker.ceph.com/issues/63141
361
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
362
* https://tracker.ceph.com/issues/62937
363
  logrotate doesn't support parallel execution on same set of logfiles
364
* https://tracker.ceph.com/issues/61400
365
  valgrind+ceph-mon issues
366
* https://tracker.ceph.com/issues/57676
367
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
368
* https://tracker.ceph.com/issues/55805
369
  error during scrub thrashing reached max tries in 900 secs
370
371 188 Venky Shankar
h3. 26 Sep 2023
372
373
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
374
375
* https://tracker.ceph.com/issues/52624
376
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
377
* https://tracker.ceph.com/issues/62873
378
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
379
* https://tracker.ceph.com/issues/61400
380
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
381
* https://tracker.ceph.com/issues/57676
382
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
383
* https://tracker.ceph.com/issues/62682
384
    mon: no mdsmap broadcast after "fs set joinable" is set to true
385
* https://tracker.ceph.com/issues/63089
386
    qa: tasks/mirror times out
387
388 185 Rishabh Dave
h3. 22 Sep 2023
389
390
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
391
392
* https://tracker.ceph.com/issues/59348
393
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
394
* https://tracker.ceph.com/issues/59344
395
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
396
* https://tracker.ceph.com/issues/59531
397
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
398
* https://tracker.ceph.com/issues/61574
399
  build failure for mdtest project
400
* https://tracker.ceph.com/issues/62702
401
  fsstress.sh: MDS slow requests for the internal 'rename' requests
402
* https://tracker.ceph.com/issues/57676
403
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
404
405
* https://tracker.ceph.com/issues/62863 
406
  deadlock in ceph-fuse causes teuthology job to hang and fail
407
* https://tracker.ceph.com/issues/62870
408
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
409
* https://tracker.ceph.com/issues/62873
410
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
411
412 186 Venky Shankar
h3. 20 Sep 2023
413
414
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
415
416
* https://tracker.ceph.com/issues/52624
417
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
418
* https://tracker.ceph.com/issues/61400
419
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
420
* https://tracker.ceph.com/issues/61399
421
    libmpich: undefined references to fi_strerror
422
* https://tracker.ceph.com/issues/62081
423
    tasks/fscrypt-common does not finish, times out
424
* https://tracker.ceph.com/issues/62658 
425
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
426
* https://tracker.ceph.com/issues/62915
427
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
428
* https://tracker.ceph.com/issues/59531
429
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
430
* https://tracker.ceph.com/issues/62873
431
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
432
* https://tracker.ceph.com/issues/62936
433
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
434
* https://tracker.ceph.com/issues/62937
435
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
436
* https://tracker.ceph.com/issues/62510
437
    snaptest-git-ceph.sh failure with fs/thrash
438
* https://tracker.ceph.com/issues/62081
439
    tasks/fscrypt-common does not finish, times out
440
* https://tracker.ceph.com/issues/62126
441
    test failure: suites/blogbench.sh stops running
442 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
443
    mon: no mdsmap broadcast after "fs set joinable" is set to true
444 186 Venky Shankar
445 184 Milind Changire
h3. 19 Sep 2023
446
447
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
448
449
* https://tracker.ceph.com/issues/58220#note-9
450
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
451
* https://tracker.ceph.com/issues/62702
452
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
453
* https://tracker.ceph.com/issues/57676
454
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
455
* https://tracker.ceph.com/issues/59348
456
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
457
* https://tracker.ceph.com/issues/52624
458
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
459
* https://tracker.ceph.com/issues/51964
460
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
461
* https://tracker.ceph.com/issues/61243
462
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
463
* https://tracker.ceph.com/issues/59344
464
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
465
* https://tracker.ceph.com/issues/62873
466
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
467
* https://tracker.ceph.com/issues/59413
468
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
469
* https://tracker.ceph.com/issues/53859
470
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
471
* https://tracker.ceph.com/issues/62482
472
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
473
474 178 Patrick Donnelly
475 177 Venky Shankar
h3. 13 Sep 2023
476
477
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
478
479
* https://tracker.ceph.com/issues/52624
480
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
481
* https://tracker.ceph.com/issues/57655
482
    qa: fs:mixed-clients kernel_untar_build failure
483
* https://tracker.ceph.com/issues/57676
484
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
485
* https://tracker.ceph.com/issues/61243
486
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
487
* https://tracker.ceph.com/issues/62567
488
    postgres workunit times out - MDS_SLOW_REQUEST in logs
489
* https://tracker.ceph.com/issues/61400
490
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
491
* https://tracker.ceph.com/issues/61399
492
    libmpich: undefined references to fi_strerror
493
* https://tracker.ceph.com/issues/57655
494
    qa: fs:mixed-clients kernel_untar_build failure
495
* https://tracker.ceph.com/issues/57676
496
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
497
* https://tracker.ceph.com/issues/51964
498
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
499
* https://tracker.ceph.com/issues/62081
500
    tasks/fscrypt-common does not finish, times out
501 178 Patrick Donnelly
502 179 Patrick Donnelly
h3. 2023 Sep 12
503 178 Patrick Donnelly
504
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
505 1 Patrick Donnelly
506 181 Patrick Donnelly
A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:
507
508 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
509 181 Patrick Donnelly
510
Failures:
511
512 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
513
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
514
* https://tracker.ceph.com/issues/57656
515
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
516
* https://tracker.ceph.com/issues/55805
517
  error scrub thrashing reached max tries in 900 secs
518
* https://tracker.ceph.com/issues/62067
519
    ffsb.sh failure "Resource temporarily unavailable"
520
* https://tracker.ceph.com/issues/59344
521
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
522
* https://tracker.ceph.com/issues/61399
523 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
524
* https://tracker.ceph.com/issues/62832
525
  common: config_proxy deadlock during shutdown (and possibly other times)
526
* https://tracker.ceph.com/issues/59413
527 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
528 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
529
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
530
* https://tracker.ceph.com/issues/62567
531
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
532
* https://tracker.ceph.com/issues/54460
533
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
534
* https://tracker.ceph.com/issues/58220#note-9
535
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
536
* https://tracker.ceph.com/issues/59348
537
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
538 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
539
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
540
* https://tracker.ceph.com/issues/62848
541
    qa: fail_fs upgrade scenario hanging
542
* https://tracker.ceph.com/issues/62081
543
    tasks/fscrypt-common does not finish, times out
544 177 Venky Shankar
545 176 Venky Shankar
h3. 11 Sep 2023
546 175 Venky Shankar
547
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
548
549
* https://tracker.ceph.com/issues/52624
550
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
551
* https://tracker.ceph.com/issues/61399
552
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
553
* https://tracker.ceph.com/issues/57655
554
    qa: fs:mixed-clients kernel_untar_build failure
555
* https://tracker.ceph.com/issues/61399
556
    ior build failure
557
* https://tracker.ceph.com/issues/59531
558
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
559
* https://tracker.ceph.com/issues/59344
560
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
561
* https://tracker.ceph.com/issues/59346
562
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
563
* https://tracker.ceph.com/issues/59348
564
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
565
* https://tracker.ceph.com/issues/57676
566
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
567
* https://tracker.ceph.com/issues/61243
568
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
569
* https://tracker.ceph.com/issues/62567
570
  postgres workunit times out - MDS_SLOW_REQUEST in logs
571
572
573 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
574
575
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
576
577
* https://tracker.ceph.com/issues/51964
578
  test_cephfs_mirror_restart_sync_on_blocklist failure
579
* https://tracker.ceph.com/issues/59348
580
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
581
* https://tracker.ceph.com/issues/53859
582
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
583
* https://tracker.ceph.com/issues/61892
584
  test_strays.TestStrays.test_snapshot_remove failed
585
* https://tracker.ceph.com/issues/54460
586
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
587
* https://tracker.ceph.com/issues/59346
588
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
589
* https://tracker.ceph.com/issues/59344
590
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
591
* https://tracker.ceph.com/issues/62484
592
  qa: ffsb.sh test failure
593
* https://tracker.ceph.com/issues/62567
594
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
595
  
596
* https://tracker.ceph.com/issues/61399
597
  ior build failure
598
* https://tracker.ceph.com/issues/57676
599
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
600
* https://tracker.ceph.com/issues/55805
601
  error scrub thrashing reached max tries in 900 secs
602
603 172 Rishabh Dave
h3. 6 Sep 2023
604 171 Rishabh Dave
605 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
606 171 Rishabh Dave
607 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
608
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
609 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
610
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
611 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
612 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
613
* https://tracker.ceph.com/issues/59348
614
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
615
* https://tracker.ceph.com/issues/54462
616
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
617
* https://tracker.ceph.com/issues/62556
618
  test_acls: xfstests_dev: python2 is missing
619
* https://tracker.ceph.com/issues/62067
620
  ffsb.sh failure "Resource temporarily unavailable"
621
* https://tracker.ceph.com/issues/57656
622
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
623 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
624
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
625 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
626 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
627
628 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
629
  ior build failure
630
* https://tracker.ceph.com/issues/57676
631
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
632
* https://tracker.ceph.com/issues/55805
633
  error scrub thrashing reached max tries in 900 secs
634 173 Rishabh Dave
635
* https://tracker.ceph.com/issues/62567
636
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
637
* https://tracker.ceph.com/issues/62702
638
  workunit test suites/fsstress.sh on smithi066 with status 124
639 170 Rishabh Dave
640
h3. 5 Sep 2023
641
642
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
643
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
644
  this run has failures, but according to Adam King these are not relevant and should be ignored
645
646
* https://tracker.ceph.com/issues/61892
647
  test_snapshot_remove (test_strays.TestStrays) failed
648
* https://tracker.ceph.com/issues/59348
649
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
650
* https://tracker.ceph.com/issues/54462
651
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
652
* https://tracker.ceph.com/issues/62067
653
  ffsb.sh failure "Resource temporarily unavailable"
654
* https://tracker.ceph.com/issues/57656 
655
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
656
* https://tracker.ceph.com/issues/59346
657
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
658
* https://tracker.ceph.com/issues/59344
659
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
660
* https://tracker.ceph.com/issues/50223
661
  client.xxxx isn't responding to mclientcaps(revoke)
662
* https://tracker.ceph.com/issues/57655
663
  qa: fs:mixed-clients kernel_untar_build failure
664
* https://tracker.ceph.com/issues/62187
665
  iozone.sh: line 5: iozone: command not found
666
 
667
* https://tracker.ceph.com/issues/61399
668
  ior build failure
669
* https://tracker.ceph.com/issues/57676
670
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
671
* https://tracker.ceph.com/issues/55805
672
  error scrub thrashing reached max tries in 900 secs
673 169 Venky Shankar
674
675
h3. 31 Aug 2023
676
677
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
678
679
* https://tracker.ceph.com/issues/52624
680
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
681
* https://tracker.ceph.com/issues/62187
682
    iozone: command not found
683
* https://tracker.ceph.com/issues/61399
684
    ior build failure
685
* https://tracker.ceph.com/issues/59531
686
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
687
* https://tracker.ceph.com/issues/61399
688
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
689
* https://tracker.ceph.com/issues/57655
690
    qa: fs:mixed-clients kernel_untar_build failure
691
* https://tracker.ceph.com/issues/59344
692
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
693
* https://tracker.ceph.com/issues/59346
694
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
695
* https://tracker.ceph.com/issues/59348
696
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
697
* https://tracker.ceph.com/issues/59413
698
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
699
* https://tracker.ceph.com/issues/62653
700
    qa: unimplemented fcntl command: 1036 with fsstress
701
* https://tracker.ceph.com/issues/61400
702
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
703
* https://tracker.ceph.com/issues/62658
704
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
705
* https://tracker.ceph.com/issues/62188
706
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
707 168 Venky Shankar
708
709
h3. 25 Aug 2023
710
711
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
712
713
* https://tracker.ceph.com/issues/59344
714
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
715
* https://tracker.ceph.com/issues/59346
716
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
717
* https://tracker.ceph.com/issues/59348
718
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
719
* https://tracker.ceph.com/issues/57655
720
    qa: fs:mixed-clients kernel_untar_build failure
721
* https://tracker.ceph.com/issues/61243
722
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
723
* https://tracker.ceph.com/issues/61399
724
    ior build failure
725
* https://tracker.ceph.com/issues/61399
726
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
727
* https://tracker.ceph.com/issues/62484
728
    qa: ffsb.sh test failure
729
* https://tracker.ceph.com/issues/59531
730
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
731
* https://tracker.ceph.com/issues/62510
732
    snaptest-git-ceph.sh failure with fs/thrash
733 167 Venky Shankar
734
735
h3. 24 Aug 2023
736
737
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
738
739
* https://tracker.ceph.com/issues/57676
740
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
741
* https://tracker.ceph.com/issues/51964
742
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
743
* https://tracker.ceph.com/issues/59344
744
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
745
* https://tracker.ceph.com/issues/59346
746
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
747
* https://tracker.ceph.com/issues/59348
748
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
749
* https://tracker.ceph.com/issues/61399
750
    ior build failure
751
* https://tracker.ceph.com/issues/61399
752
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
753
* https://tracker.ceph.com/issues/62510
754
    snaptest-git-ceph.sh failure with fs/thrash
755
* https://tracker.ceph.com/issues/62484
756
    qa: ffsb.sh test failure
757
* https://tracker.ceph.com/issues/57087
758
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
759
* https://tracker.ceph.com/issues/57656
760
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
761
* https://tracker.ceph.com/issues/62187
762
    iozone: command not found
763
* https://tracker.ceph.com/issues/62188
764
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
765
* https://tracker.ceph.com/issues/62567
766
    postgres workunit times out - MDS_SLOW_REQUEST in logs
767 166 Venky Shankar
768
769
h3. 22 Aug 2023
770
771
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
772
773
* https://tracker.ceph.com/issues/57676
774
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
775
* https://tracker.ceph.com/issues/51964
776
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
777
* https://tracker.ceph.com/issues/59344
778
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
779
* https://tracker.ceph.com/issues/59346
780
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
781
* https://tracker.ceph.com/issues/59348
782
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
783
* https://tracker.ceph.com/issues/61399
784
    ior build failure
785
* https://tracker.ceph.com/issues/61399
786
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
787
* https://tracker.ceph.com/issues/57655
788
    qa: fs:mixed-clients kernel_untar_build failure
789
* https://tracker.ceph.com/issues/61243
790
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
791
* https://tracker.ceph.com/issues/62188
792
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
793
* https://tracker.ceph.com/issues/62510
794
    snaptest-git-ceph.sh failure with fs/thrash
795
* https://tracker.ceph.com/issues/62511
796
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
797 165 Venky Shankar
798
799
h3. 14 Aug 2023
800
801
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
802
803
* https://tracker.ceph.com/issues/51964
804
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
805
* https://tracker.ceph.com/issues/61400
806
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
807
* https://tracker.ceph.com/issues/61399
808
    ior build failure
809
* https://tracker.ceph.com/issues/59348
810
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
811
* https://tracker.ceph.com/issues/59531
812
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
813
* https://tracker.ceph.com/issues/59344
814
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
815
* https://tracker.ceph.com/issues/59346
816
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
817
* https://tracker.ceph.com/issues/61399
818
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
819
* https://tracker.ceph.com/issues/59684 [kclient bug]
820
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
821
* https://tracker.ceph.com/issues/61243 (NEW)
822
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
823
* https://tracker.ceph.com/issues/57655
824
    qa: fs:mixed-clients kernel_untar_build failure
825
* https://tracker.ceph.com/issues/57656
826
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
827 163 Venky Shankar
828
829
h3. 28 July 2023
830
831
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
832
833
* https://tracker.ceph.com/issues/51964
834
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
835
* https://tracker.ceph.com/issues/61400
836
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
837
* https://tracker.ceph.com/issues/61399
838
    ior build failure
839
* https://tracker.ceph.com/issues/57676
840
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
841
* https://tracker.ceph.com/issues/59348
842
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
843
* https://tracker.ceph.com/issues/59531
844
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
845
* https://tracker.ceph.com/issues/59344
846
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
847
* https://tracker.ceph.com/issues/59346
848
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
849
* https://github.com/ceph/ceph/pull/52556
850
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
851
* https://tracker.ceph.com/issues/62187
852
    iozone: command not found
853
* https://tracker.ceph.com/issues/61399
854
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
855
* https://tracker.ceph.com/issues/62188
856 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
857 158 Rishabh Dave
858
h3. 24 Jul 2023
859
860
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
861
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
862
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
863
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
864
One more run to check whether blogbench.sh fails every time:
865
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
866
blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failures were not related to any of the PRs under testing -
867 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
868
869
* https://tracker.ceph.com/issues/61892
870
  test_snapshot_remove (test_strays.TestStrays) failed
871
* https://tracker.ceph.com/issues/53859
872
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
873
* https://tracker.ceph.com/issues/61982
874
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
875
* https://tracker.ceph.com/issues/52438
876
  qa: ffsb timeout
877
* https://tracker.ceph.com/issues/54460
878
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
879
* https://tracker.ceph.com/issues/57655
880
  qa: fs:mixed-clients kernel_untar_build failure
881
* https://tracker.ceph.com/issues/48773
882
  reached max tries: scrub does not complete
883
* https://tracker.ceph.com/issues/58340
884
  mds: fsstress.sh hangs with multimds
885
* https://tracker.ceph.com/issues/61400
886
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
887
* https://tracker.ceph.com/issues/57206
888
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
889
  
890
* https://tracker.ceph.com/issues/57656
891
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
892
* https://tracker.ceph.com/issues/61399
893
  ior build failure
894
* https://tracker.ceph.com/issues/57676
895
  error during scrub thrashing: backtrace
896
  
897
* https://tracker.ceph.com/issues/38452
898
  'sudo -u postgres -- pgbench -s 500 -i' failed
899 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
900 157 Venky Shankar
  blogbench.sh failure
901
902
h3. 18 July 2023
903
904
* https://tracker.ceph.com/issues/52624
905
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
906
* https://tracker.ceph.com/issues/57676
907
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
908
* https://tracker.ceph.com/issues/54460
909
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
910
* https://tracker.ceph.com/issues/57655
911
    qa: fs:mixed-clients kernel_untar_build failure
912
* https://tracker.ceph.com/issues/51964
913
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
914
* https://tracker.ceph.com/issues/59344
915
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
916
* https://tracker.ceph.com/issues/61182
917
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
918
* https://tracker.ceph.com/issues/61957
919
    test_client_limits.TestClientLimits.test_client_release_bug
920
* https://tracker.ceph.com/issues/59348
921
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
922
* https://tracker.ceph.com/issues/61892
923
    test_strays.TestStrays.test_snapshot_remove failed
924
* https://tracker.ceph.com/issues/59346
925
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
926
* https://tracker.ceph.com/issues/44565
927
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
928
* https://tracker.ceph.com/issues/62067
929
    ffsb.sh failure "Resource temporarily unavailable"
930 156 Venky Shankar
931
932
h3. 17 July 2023
933
934
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
935
936
* https://tracker.ceph.com/issues/61982
937
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
938
* https://tracker.ceph.com/issues/59344
939
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
940
* https://tracker.ceph.com/issues/61182
941
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
942
* https://tracker.ceph.com/issues/61957
943
    test_client_limits.TestClientLimits.test_client_release_bug
944
* https://tracker.ceph.com/issues/61400
945
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
946
* https://tracker.ceph.com/issues/59348
947
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
948
* https://tracker.ceph.com/issues/61892
949
    test_strays.TestStrays.test_snapshot_remove failed
950
* https://tracker.ceph.com/issues/59346
951
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
952
* https://tracker.ceph.com/issues/62036
953
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
954
* https://tracker.ceph.com/issues/61737
955
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
956
* https://tracker.ceph.com/issues/44565
957
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
958 155 Rishabh Dave
959 1 Patrick Donnelly
960 153 Rishabh Dave
h3. 13 July 2023 Run 2
961 152 Rishabh Dave
962
963
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
964
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
965
966
* https://tracker.ceph.com/issues/61957
967
  test_client_limits.TestClientLimits.test_client_release_bug
968
* https://tracker.ceph.com/issues/61982
969
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
970
* https://tracker.ceph.com/issues/59348
971
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
972
* https://tracker.ceph.com/issues/59344
973
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
974
* https://tracker.ceph.com/issues/54460
975
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
976
* https://tracker.ceph.com/issues/57655
977
  qa: fs:mixed-clients kernel_untar_build failure
978
* https://tracker.ceph.com/issues/61400
979
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
980
* https://tracker.ceph.com/issues/61399
981
  ior build failure
982
983 151 Venky Shankar
h3. 13 July 2023
984
985
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
986
987
* https://tracker.ceph.com/issues/54460
988
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
989
* https://tracker.ceph.com/issues/61400
990
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
991
* https://tracker.ceph.com/issues/57655
992
    qa: fs:mixed-clients kernel_untar_build failure
993
* https://tracker.ceph.com/issues/61945
994
    LibCephFS.DelegTimeout failure
995
* https://tracker.ceph.com/issues/52624
996
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
997
* https://tracker.ceph.com/issues/57676
998
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
999
* https://tracker.ceph.com/issues/59348
1000
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1001
* https://tracker.ceph.com/issues/59344
1002
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1003
* https://tracker.ceph.com/issues/51964
1004
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1005
* https://tracker.ceph.com/issues/59346
1006
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1007
* https://tracker.ceph.com/issues/61982
1008
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1009 150 Rishabh Dave
1010
1011
h3. 13 Jul 2023
1012
1013
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1014
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1015
1016
* https://tracker.ceph.com/issues/61957
1017
  test_client_limits.TestClientLimits.test_client_release_bug
1018
* https://tracker.ceph.com/issues/59348
1019
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1020
* https://tracker.ceph.com/issues/59346
1021
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1022
* https://tracker.ceph.com/issues/48773
1023
  scrub does not complete: reached max tries
1024
* https://tracker.ceph.com/issues/59344
1025
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1026
* https://tracker.ceph.com/issues/52438
1027
  qa: ffsb timeout
1028
* https://tracker.ceph.com/issues/57656
1029
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1030
* https://tracker.ceph.com/issues/58742
1031
  xfstests-dev: kcephfs: generic
1032
* https://tracker.ceph.com/issues/61399
1033 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1034 149 Rishabh Dave
1035 148 Rishabh Dave
h3. 12 July 2023
1036
1037
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1038
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1039
1040
* https://tracker.ceph.com/issues/61892
1041
  test_strays.TestStrays.test_snapshot_remove failed
1042
* https://tracker.ceph.com/issues/59348
1043
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1044
* https://tracker.ceph.com/issues/53859
1045
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1046
* https://tracker.ceph.com/issues/59346
1047
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1048
* https://tracker.ceph.com/issues/58742
1049
  xfstests-dev: kcephfs: generic
1050
* https://tracker.ceph.com/issues/59344
1051
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1052
* https://tracker.ceph.com/issues/52438
1053
  qa: ffsb timeout
1054
* https://tracker.ceph.com/issues/57656
1055
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1056
* https://tracker.ceph.com/issues/54460
1057
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1058
* https://tracker.ceph.com/issues/57655
1059
  qa: fs:mixed-clients kernel_untar_build failure
1060
* https://tracker.ceph.com/issues/61182
1061
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1062
* https://tracker.ceph.com/issues/61400
1063
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1064 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1065 146 Patrick Donnelly
  reached max tries: scrub does not complete
1066
1067
h3. 05 July 2023
1068
1069
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1070
1071 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1072 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1073
1074
h3. 27 Jun 2023
1075
1076
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1077 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1078
1079
* https://tracker.ceph.com/issues/59348
1080
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1081
* https://tracker.ceph.com/issues/54460
1082
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1083
* https://tracker.ceph.com/issues/59346
1084
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1085
* https://tracker.ceph.com/issues/59344
1086
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1087
* https://tracker.ceph.com/issues/61399
1088
  libmpich: undefined references to fi_strerror
1089
* https://tracker.ceph.com/issues/50223
1090
  client.xxxx isn't responding to mclientcaps(revoke)
1091 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1092
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1093 142 Venky Shankar
1094
1095
h3. 22 June 2023
1096
1097
* https://tracker.ceph.com/issues/57676
1098
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1099
* https://tracker.ceph.com/issues/54460
1100
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1101
* https://tracker.ceph.com/issues/59344
1102
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1103
* https://tracker.ceph.com/issues/59348
1104
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1105
* https://tracker.ceph.com/issues/61400
1106
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1107
* https://tracker.ceph.com/issues/57655
1108
    qa: fs:mixed-clients kernel_untar_build failure
1109
* https://tracker.ceph.com/issues/61394
1110
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1111
* https://tracker.ceph.com/issues/61762
1112
    qa: wait_for_clean: failed before timeout expired
1113
* https://tracker.ceph.com/issues/61775
1114
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1115
* https://tracker.ceph.com/issues/44565
1116
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1117
* https://tracker.ceph.com/issues/61790
1118
    cephfs client to mds comms remain silent after reconnect
1119
* https://tracker.ceph.com/issues/61791
1120
    snaptest-git-ceph.sh test timed out (job dead)
1121 139 Venky Shankar
1122
1123
h3. 20 June 2023
1124
1125
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1126
1127
* https://tracker.ceph.com/issues/57676
1128
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1129
* https://tracker.ceph.com/issues/54460
1130
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1131 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1132 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1133 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1134 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1135
* https://tracker.ceph.com/issues/59344
1136
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1137
* https://tracker.ceph.com/issues/59348
1138
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1139
* https://tracker.ceph.com/issues/57656
1140
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1141
* https://tracker.ceph.com/issues/61400
1142
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1143
* https://tracker.ceph.com/issues/57655
1144
    qa: fs:mixed-clients kernel_untar_build failure
1145
* https://tracker.ceph.com/issues/44565
1146
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1147
* https://tracker.ceph.com/issues/61737
1148 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1149
1150
h3. 16 June 2023
1151
1152 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1153 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1154 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1155 1 Patrick Donnelly
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1156
1157
1158
* https://tracker.ceph.com/issues/59344
1159
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1160 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1161
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1162 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1163
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1164
* https://tracker.ceph.com/issues/57656
1165
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1166
* https://tracker.ceph.com/issues/54460
1167
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1168 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1169
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1170 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1171
  libmpich: undefined references to fi_strerror
1172
* https://tracker.ceph.com/issues/58945
1173
  xfstests-dev: ceph-fuse: generic 
1174 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1175 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1176
1177
h3. 24 May 2023
1178
1179
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1180
1181
* https://tracker.ceph.com/issues/57676
1182
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1183
* https://tracker.ceph.com/issues/59683
1184
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1185
* https://tracker.ceph.com/issues/61399
1186
    qa: "[Makefile:299: ior] Error 1"
1187
* https://tracker.ceph.com/issues/61265
1188
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1189
* https://tracker.ceph.com/issues/59348
1190
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1191
* https://tracker.ceph.com/issues/59346
1192
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1193
* https://tracker.ceph.com/issues/61400
1194
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1195
* https://tracker.ceph.com/issues/54460
1196
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1197
* https://tracker.ceph.com/issues/51964
1198
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1199
* https://tracker.ceph.com/issues/59344
1200
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1201
* https://tracker.ceph.com/issues/61407
1202
    mds: abort on CInode::verify_dirfrags
1203
* https://tracker.ceph.com/issues/48773
1204
    qa: scrub does not complete
1205
* https://tracker.ceph.com/issues/57655
1206
    qa: fs:mixed-clients kernel_untar_build failure
1207
* https://tracker.ceph.com/issues/61409
1208 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1209
1210
h3. 15 May 2023
1211 130 Venky Shankar
1212 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1213
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1214
1215
* https://tracker.ceph.com/issues/52624
1216
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1217
* https://tracker.ceph.com/issues/54460
1218
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1219
* https://tracker.ceph.com/issues/57676
1220
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1221
* https://tracker.ceph.com/issues/59684 [kclient bug]
1222
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1223
* https://tracker.ceph.com/issues/59348
1224
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1225 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1226
    dbench test results in call trace in dmesg [kclient bug]
1227 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1228 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1229 125 Venky Shankar
1230
 
1231 129 Rishabh Dave
h3. 11 May 2023
1232
1233
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1234
1235
* https://tracker.ceph.com/issues/59684 [kclient bug]
1236
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1237
* https://tracker.ceph.com/issues/59348
1238
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1239
* https://tracker.ceph.com/issues/57655
1240
  qa: fs:mixed-clients kernel_untar_build failure
1241
* https://tracker.ceph.com/issues/57676
1242
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1243
* https://tracker.ceph.com/issues/55805
1244
  error during scrub thrashing reached max tries in 900 secs
1245
* https://tracker.ceph.com/issues/54460
1246
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1247
* https://tracker.ceph.com/issues/57656
1248
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1249
* https://tracker.ceph.com/issues/58220
1250
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1251 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1252
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1253 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1254
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1255 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1256
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1257 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1258
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1259
1260 125 Venky Shankar
h3. 11 May 2023
1261 127 Venky Shankar
1262
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1263 126 Venky Shankar
1264 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1265
 was included in the branch; however, the PR got updated and needs a retest).
1266
1267
* https://tracker.ceph.com/issues/52624
1268
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1269
* https://tracker.ceph.com/issues/54460
1270
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1271
* https://tracker.ceph.com/issues/57676
1272
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1273
* https://tracker.ceph.com/issues/59683
1274
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1275
* https://tracker.ceph.com/issues/59684 [kclient bug]
1276
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1277
* https://tracker.ceph.com/issues/59348
1278 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1279
1280
h3. 09 May 2023
1281
1282
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1283
1284
* https://tracker.ceph.com/issues/52624
1285
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1286
* https://tracker.ceph.com/issues/58340
1287
    mds: fsstress.sh hangs with multimds
1288
* https://tracker.ceph.com/issues/54460
1289
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1290
* https://tracker.ceph.com/issues/57676
1291
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1292
* https://tracker.ceph.com/issues/51964
1293
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1294
* https://tracker.ceph.com/issues/59350
1295
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1296
* https://tracker.ceph.com/issues/59683
1297
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1298
* https://tracker.ceph.com/issues/59684 [kclient bug]
1299
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1300
* https://tracker.ceph.com/issues/59348
1301 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1302
1303
h3. 10 Apr 2023
1304
1305
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1306
1307
* https://tracker.ceph.com/issues/52624
1308
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1309
* https://tracker.ceph.com/issues/58340
1310
    mds: fsstress.sh hangs with multimds
1311
* https://tracker.ceph.com/issues/54460
1312
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1313
* https://tracker.ceph.com/issues/57676
1314
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1315 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1316 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1317 121 Rishabh Dave
1318 120 Rishabh Dave
h3. 31 Mar 2023
1319 122 Rishabh Dave
1320
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1321 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1322
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1323
1324
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1325
1326
* https://tracker.ceph.com/issues/57676
1327
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1328
* https://tracker.ceph.com/issues/54460
1329
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1330
* https://tracker.ceph.com/issues/58220
1331
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1332
* https://tracker.ceph.com/issues/58220#note-9
1333
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1334
* https://tracker.ceph.com/issues/56695
1335
  Command failed (workunit test suites/pjd.sh)
1336
* https://tracker.ceph.com/issues/58564 
1337
  workunit dbench failed with error code 1
1338
* https://tracker.ceph.com/issues/57206
1339
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1340
* https://tracker.ceph.com/issues/57580
1341
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1342
* https://tracker.ceph.com/issues/58940
1343
  ceph osd hit ceph_abort
1344
* https://tracker.ceph.com/issues/55805
1345 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1346
1347
h3. 30 March 2023
1348
1349
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1350
1351
* https://tracker.ceph.com/issues/58938
1352
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1353
* https://tracker.ceph.com/issues/51964
1354
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1355
* https://tracker.ceph.com/issues/58340
1356 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1357
1358 115 Venky Shankar
h3. 29 March 2023
1359 114 Venky Shankar
1360
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1361
1362
* https://tracker.ceph.com/issues/56695
1363
    [RHEL stock] pjd test failures
1364
* https://tracker.ceph.com/issues/57676
1365
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1366
* https://tracker.ceph.com/issues/57087
1367
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1368 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1369
    mds: fsstress.sh hangs with multimds
1370 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1371
    qa: fs:mixed-clients kernel_untar_build failure
1372 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1373
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1374 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1375 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1376
1377
h3. 13 Mar 2023
1378
1379
* https://tracker.ceph.com/issues/56695
1380
    [RHEL stock] pjd test failures
1381
* https://tracker.ceph.com/issues/57676
1382
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1383
* https://tracker.ceph.com/issues/51964
1384
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1385
* https://tracker.ceph.com/issues/54460
1386
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1387
* https://tracker.ceph.com/issues/57656
1388 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1389
1390
h3. 09 Mar 2023
1391
1392
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1393
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1394
1395
* https://tracker.ceph.com/issues/56695
1396
    [RHEL stock] pjd test failures
1397
* https://tracker.ceph.com/issues/57676
1398
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1399
* https://tracker.ceph.com/issues/51964
1400
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1401
* https://tracker.ceph.com/issues/54460
1402
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1403
* https://tracker.ceph.com/issues/58340
1404
    mds: fsstress.sh hangs with multimds
1405
* https://tracker.ceph.com/issues/57087
1406 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1407
1408
h3. 07 Mar 2023
1409
1410
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1411
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1412
1413
* https://tracker.ceph.com/issues/56695
1414
    [RHEL stock] pjd test failures
1415
* https://tracker.ceph.com/issues/57676
1416
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1417
* https://tracker.ceph.com/issues/51964
1418
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1419
* https://tracker.ceph.com/issues/57656
1420
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1421
* https://tracker.ceph.com/issues/57655
1422
    qa: fs:mixed-clients kernel_untar_build failure
1423
* https://tracker.ceph.com/issues/58220
1424
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1425
* https://tracker.ceph.com/issues/54460
1426
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1427
* https://tracker.ceph.com/issues/58934
1428 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1429
1430
h3. 28 Feb 2023
1431
1432
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1433
1434
* https://tracker.ceph.com/issues/56695
1435
    [RHEL stock] pjd test failures
1436
* https://tracker.ceph.com/issues/57676
1437
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1438 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1439 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1440
1441 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1442
1443
h3. 25 Jan 2023
1444
1445
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1446
1447
* https://tracker.ceph.com/issues/52624
1448
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1449
* https://tracker.ceph.com/issues/56695
1450
    [RHEL stock] pjd test failures
1451
* https://tracker.ceph.com/issues/57676
1452
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1453
* https://tracker.ceph.com/issues/56446
1454
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1455
* https://tracker.ceph.com/issues/57206
1456
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1457
* https://tracker.ceph.com/issues/58220
1458
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1459
* https://tracker.ceph.com/issues/58340
1460
  mds: fsstress.sh hangs with multimds
1461
* https://tracker.ceph.com/issues/56011
1462
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1463
* https://tracker.ceph.com/issues/54460
1464 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1465
1466
h3. 30 Jan 2023
1467
1468
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1469
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1470 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1471
1472 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1473
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1474
* https://tracker.ceph.com/issues/56695
1475
  [RHEL stock] pjd test failures
1476
* https://tracker.ceph.com/issues/57676
1477
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1478
* https://tracker.ceph.com/issues/55332
1479
  Failure in snaptest-git-ceph.sh
1480
* https://tracker.ceph.com/issues/51964
1481
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1482
* https://tracker.ceph.com/issues/56446
1483
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1484
* https://tracker.ceph.com/issues/57655 
1485
  qa: fs:mixed-clients kernel_untar_build failure
1486
* https://tracker.ceph.com/issues/54460
1487
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1488 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1489
  mds: fsstress.sh hangs with multimds
1490 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1491 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1492
1493
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1494 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1495
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1496 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1497 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1498
1499
h3. 15 Dec 2022
1500
1501
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1502
1503
* https://tracker.ceph.com/issues/52624
1504
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1505
* https://tracker.ceph.com/issues/56695
1506
    [RHEL stock] pjd test failures
1507
* https://tracker.ceph.com/issues/58219
1508
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1509
* https://tracker.ceph.com/issues/57655
1510
    qa: fs:mixed-clients kernel_untar_build failure
1511
* https://tracker.ceph.com/issues/57676
1512
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1513
* https://tracker.ceph.com/issues/58340
1514 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1515
1516
h3. 08 Dec 2022
1517 99 Venky Shankar
1518 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1519
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1520
1521
(lots of transient git.ceph.com failures)
1522
1523
* https://tracker.ceph.com/issues/52624
1524
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1525
* https://tracker.ceph.com/issues/56695
1526
    [RHEL stock] pjd test failures
1527
* https://tracker.ceph.com/issues/57655
1528
    qa: fs:mixed-clients kernel_untar_build failure
1529
* https://tracker.ceph.com/issues/58219
1530
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1531
* https://tracker.ceph.com/issues/58220
1532
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1533 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1534
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1535 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1536
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1537
* https://tracker.ceph.com/issues/54460
1538
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1539 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1540 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1541
1542
h3. 14 Oct 2022
1543
1544
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1545
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1546
1547
* https://tracker.ceph.com/issues/52624
1548
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1549
* https://tracker.ceph.com/issues/55804
1550
    Command failed (workunit test suites/pjd.sh)
1551
* https://tracker.ceph.com/issues/51964
1552
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1553
* https://tracker.ceph.com/issues/57682
1554
    client: ERROR: test_reconnect_after_blocklisted
1555 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1556 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1557
1558
h3. 10 Oct 2022
1559 92 Rishabh Dave
1560 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1561
1562
reruns
1563
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1564 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1565 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1566 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1567 91 Rishabh Dave
1568
known bugs
1569
* https://tracker.ceph.com/issues/52624
1570
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1571
* https://tracker.ceph.com/issues/50223
1572
  client.xxxx isn't responding to mclientcaps(revoke)
1573
* https://tracker.ceph.com/issues/57299
1574
  qa: test_dump_loads fails with JSONDecodeError
1575
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1576
  qa: fs:mixed-clients kernel_untar_build failure
1577
* https://tracker.ceph.com/issues/57206
1578 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1579
1580
h3. 2022 Sep 29
1581
1582
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1583
1584
* https://tracker.ceph.com/issues/55804
1585
  Command failed (workunit test suites/pjd.sh)
1586
* https://tracker.ceph.com/issues/36593
1587
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1588
* https://tracker.ceph.com/issues/52624
1589
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1590
* https://tracker.ceph.com/issues/51964
1591
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1592
* https://tracker.ceph.com/issues/56632
1593
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1594
* https://tracker.ceph.com/issues/50821
1595 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1596
1597
h3. 2022 Sep 26
1598
1599
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1600
1601
* https://tracker.ceph.com/issues/55804
1602
    qa failure: pjd link tests failed
1603
* https://tracker.ceph.com/issues/57676
1604
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1605
* https://tracker.ceph.com/issues/52624
1606
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1607
* https://tracker.ceph.com/issues/57580
1608
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1609
* https://tracker.ceph.com/issues/48773
1610
    qa: scrub does not complete
1611
* https://tracker.ceph.com/issues/57299
1612
    qa: test_dump_loads fails with JSONDecodeError
1613
* https://tracker.ceph.com/issues/57280
1614
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1615
* https://tracker.ceph.com/issues/57205
1616
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1617
* https://tracker.ceph.com/issues/57656
1618
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1619
* https://tracker.ceph.com/issues/57677
1620
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1621
* https://tracker.ceph.com/issues/57206
1622
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1623
* https://tracker.ceph.com/issues/57446
1624
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1625 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1626
    qa: fs:mixed-clients kernel_untar_build failure
1627 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1628
    client: ERROR: test_reconnect_after_blocklisted
1629 87 Patrick Donnelly
1630
1631
h3. 2022 Sep 22
1632
1633
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1634
1635
* https://tracker.ceph.com/issues/57299
1636
    qa: test_dump_loads fails with JSONDecodeError
1637
* https://tracker.ceph.com/issues/57205
1638
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1639
* https://tracker.ceph.com/issues/52624
1640
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1641
* https://tracker.ceph.com/issues/57580
1642
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1643
* https://tracker.ceph.com/issues/57280
1644
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1645
* https://tracker.ceph.com/issues/48773
1646
    qa: scrub does not complete
1647
* https://tracker.ceph.com/issues/56446
1648
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1649
* https://tracker.ceph.com/issues/57206
1650
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1651
* https://tracker.ceph.com/issues/51267
1652
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1653
1654
NEW:
1655
1656
* https://tracker.ceph.com/issues/57656
1657
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1658
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1659
    qa: fs:mixed-clients kernel_untar_build failure
1660
* https://tracker.ceph.com/issues/57657
1661
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1662
1663
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1664 80 Venky Shankar
1665 79 Venky Shankar
1666
h3. 2022 Sep 16
1667
1668
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1669
1670
* https://tracker.ceph.com/issues/57446
1671
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1672
* https://tracker.ceph.com/issues/57299
1673
    qa: test_dump_loads fails with JSONDecodeError
1674
* https://tracker.ceph.com/issues/50223
1675
    client.xxxx isn't responding to mclientcaps(revoke)
1676
* https://tracker.ceph.com/issues/52624
1677
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1678
* https://tracker.ceph.com/issues/57205
1679
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1680
* https://tracker.ceph.com/issues/57280
1681
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1682
* https://tracker.ceph.com/issues/51282
1683
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1684
* https://tracker.ceph.com/issues/48203
1685
  https://tracker.ceph.com/issues/36593
1686
    qa: quota failure
1687
    qa: quota failure caused by clients stepping on each other
1688
* https://tracker.ceph.com/issues/57580
1689 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1690
1691 76 Rishabh Dave
1692
h3. 2022 Aug 26
1693
1694
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1695
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1696
1697
* https://tracker.ceph.com/issues/57206
1698
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1699
* https://tracker.ceph.com/issues/56632
1700
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1701
* https://tracker.ceph.com/issues/56446
1702
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1703
* https://tracker.ceph.com/issues/51964
1704
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1705
* https://tracker.ceph.com/issues/53859
1706
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1707
1708
* https://tracker.ceph.com/issues/54460
1709
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1710
* https://tracker.ceph.com/issues/54462
1711
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1712
* https://tracker.ceph.com/issues/54460
1713
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1714
* https://tracker.ceph.com/issues/36593
1715
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1716
1717
* https://tracker.ceph.com/issues/52624
1718
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1719
* https://tracker.ceph.com/issues/55804
1720
  Command failed (workunit test suites/pjd.sh)
1721
* https://tracker.ceph.com/issues/50223
1722
  client.xxxx isn't responding to mclientcaps(revoke)
1723 75 Venky Shankar
1724
1725
h3. 2022 Aug 22
1726
1727
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1728
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1729
1730
* https://tracker.ceph.com/issues/52624
1731
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1732
* https://tracker.ceph.com/issues/56446
1733
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1734
* https://tracker.ceph.com/issues/55804
1735
    Command failed (workunit test suites/pjd.sh)
1736
* https://tracker.ceph.com/issues/51278
1737
    mds: "FAILED ceph_assert(!segments.empty())"
1738
* https://tracker.ceph.com/issues/54460
1739
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1740
* https://tracker.ceph.com/issues/57205
1741
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1742
* https://tracker.ceph.com/issues/57206
1743
    ceph_test_libcephfs_reclaim crashes during test
1744
* https://tracker.ceph.com/issues/53859
1745
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1746
* https://tracker.ceph.com/issues/50223
1747 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1748
1749
h3. 2022 Aug 12
1750
1751
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1752
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1753
1754
* https://tracker.ceph.com/issues/52624
1755
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1756
* https://tracker.ceph.com/issues/56446
1757
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1758
* https://tracker.ceph.com/issues/51964
1759
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1760
* https://tracker.ceph.com/issues/55804
1761
    Command failed (workunit test suites/pjd.sh)
1762
* https://tracker.ceph.com/issues/50223
1763
    client.xxxx isn't responding to mclientcaps(revoke)
1764
* https://tracker.ceph.com/issues/50821
1765 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1766 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1767 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1768
1769
h3. 2022 Aug 04
1770
1771
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1772
1773 69 Rishabh Dave
Unrelated teuthology failure on rhel
1774 68 Rishabh Dave
1775
h3. 2022 Jul 25
1776
1777
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1778
1779 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1780
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1781 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1782
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1783
1784
* https://tracker.ceph.com/issues/55804
1785
  Command failed (workunit test suites/pjd.sh)
1786
* https://tracker.ceph.com/issues/50223
1787
  client.xxxx isn't responding to mclientcaps(revoke)
1788
1789
* https://tracker.ceph.com/issues/54460
1790
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1791 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1792 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1793 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1794 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1795
1796
h3. 2022 July 22
1797
1798
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1799
1800
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
1801
transient selinux ping failure
1802
1803
* https://tracker.ceph.com/issues/56694
1804
    qa: avoid blocking forever on hung umount
1805
* https://tracker.ceph.com/issues/56695
1806
    [RHEL stock] pjd test failures
1807
* https://tracker.ceph.com/issues/56696
1808
    admin keyring disappears during qa run
1809
* https://tracker.ceph.com/issues/56697
1810
    qa: fs/snaps fails for fuse
1811
* https://tracker.ceph.com/issues/50222
1812
    osd: 5.2s0 deep-scrub : stat mismatch
1813
* https://tracker.ceph.com/issues/56698
1814
    client: FAILED ceph_assert(_size == 0)
1815
* https://tracker.ceph.com/issues/50223
1816
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1817 66 Rishabh Dave
1818 65 Rishabh Dave
1819
h3. 2022 Jul 15
1820
1821
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1822
1823
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1824
1825
* https://tracker.ceph.com/issues/53859
1826
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1827
* https://tracker.ceph.com/issues/55804
1828
  Command failed (workunit test suites/pjd.sh)
1829
* https://tracker.ceph.com/issues/50223
1830
  client.xxxx isn't responding to mclientcaps(revoke)
1831
* https://tracker.ceph.com/issues/50222
1832
  osd: deep-scrub : stat mismatch
1833
1834
* https://tracker.ceph.com/issues/56632
1835
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1836
* https://tracker.ceph.com/issues/56634
1837
  workunit test fs/snaps/snaptest-intodir.sh
1838
* https://tracker.ceph.com/issues/56644
1839
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1840
1841 61 Rishabh Dave
1842
1843
h3. 2022 July 05
1844 62 Rishabh Dave
1845 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1846
1847
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1848
1849
On 2nd re-run only a few jobs failed -
1850 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1851
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1852
1853
* https://tracker.ceph.com/issues/56446
1854
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1855
* https://tracker.ceph.com/issues/55804
1856
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1857
1858
* https://tracker.ceph.com/issues/56445
1859 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1860
* https://tracker.ceph.com/issues/51267
1861
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1862 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1863
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1864 61 Rishabh Dave
1865 58 Venky Shankar
1866
1867
h3. 2022 July 04
1868
1869
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1870
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
1871
1872
* https://tracker.ceph.com/issues/56445
1873 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1874
* https://tracker.ceph.com/issues/56446
1875
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1876
* https://tracker.ceph.com/issues/51964
1877 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1878 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1879 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1880
1881
h3. 2022 June 20
1882
1883
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1884
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1885
1886
* https://tracker.ceph.com/issues/52624
1887
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1888
* https://tracker.ceph.com/issues/55804
1889
    qa failure: pjd link tests failed
1890
* https://tracker.ceph.com/issues/54108
1891
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1892
* https://tracker.ceph.com/issues/55332
1893 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1894
1895
h3. 2022 June 13
1896
1897
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1898
1899
* https://tracker.ceph.com/issues/56024
1900
    cephadm: removes ceph.conf during qa run causing command failure
1901
* https://tracker.ceph.com/issues/48773
1902
    qa: scrub does not complete
1903
* https://tracker.ceph.com/issues/56012
1904
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1905 55 Venky Shankar
1906 54 Venky Shankar
1907
h3. 2022 Jun 13
1908
1909
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1910
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1911
1912
* https://tracker.ceph.com/issues/52624
1913
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1914
* https://tracker.ceph.com/issues/51964
1915
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1916
* https://tracker.ceph.com/issues/53859
1917
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1918
* https://tracker.ceph.com/issues/55804
1919
    qa failure: pjd link tests failed
1920
* https://tracker.ceph.com/issues/56003
1921
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1922
* https://tracker.ceph.com/issues/56011
1923
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1924
* https://tracker.ceph.com/issues/56012
1925 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1926
1927
h3. 2022 Jun 07
1928
1929
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1930
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1931
1932
* https://tracker.ceph.com/issues/52624
1933
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1934
* https://tracker.ceph.com/issues/50223
1935
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1936
* https://tracker.ceph.com/issues/50224
1937 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1938
1939
h3. 2022 May 12
1940 52 Venky Shankar
1941 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1942
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
1943
1944
* https://tracker.ceph.com/issues/52624
1945
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1946
* https://tracker.ceph.com/issues/50223
1947
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1948
* https://tracker.ceph.com/issues/55332
1949
    Failure in snaptest-git-ceph.sh
1950
* https://tracker.ceph.com/issues/53859
1951 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1952 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1953
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1954 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1955 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1956
1957 50 Venky Shankar
h3. 2022 May 04
1958
1959
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1960 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1961
1962
* https://tracker.ceph.com/issues/52624
1963
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1964
* https://tracker.ceph.com/issues/50223
1965
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1966
* https://tracker.ceph.com/issues/55332
1967
    Failure in snaptest-git-ceph.sh
1968
* https://tracker.ceph.com/issues/53859
1969
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1970
* https://tracker.ceph.com/issues/55516
1971
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1972
* https://tracker.ceph.com/issues/55537
1973
    mds: crash during fs:upgrade test
1974
* https://tracker.ceph.com/issues/55538
1975 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1976
1977
h3. 2022 Apr 25
1978
1979
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1980
1981
* https://tracker.ceph.com/issues/52624
1982
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1983
* https://tracker.ceph.com/issues/50223
1984
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1985
* https://tracker.ceph.com/issues/55258
1986
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1987
* https://tracker.ceph.com/issues/55377
1988 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1989
1990
h3. 2022 Apr 14
1991
1992
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1993
1994
* https://tracker.ceph.com/issues/52624
1995
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1996
* https://tracker.ceph.com/issues/50223
1997
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1998
* https://tracker.ceph.com/issues/52438
1999
    qa: ffsb timeout
2000
* https://tracker.ceph.com/issues/55170
2001
    mds: crash during rejoin (CDir::fetch_keys)
2002
* https://tracker.ceph.com/issues/55331
2003
    pjd failure
2004
* https://tracker.ceph.com/issues/48773
2005
    qa: scrub does not complete
2006
* https://tracker.ceph.com/issues/55332
2007
    Failure in snaptest-git-ceph.sh
2008
* https://tracker.ceph.com/issues/55258
2009 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2010
2011 46 Venky Shankar
h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well (lots of failures); debugging by dropping PRs and running against the master branch, merging only unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing