Main » History » Version 216

Venky Shankar, 01/17/2024 03:36 PM

h1. MAIN

h3. ADD NEW ENTRY BELOW

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in last run was due to a kernel MM layer failure, unrelated to CephFS
* from last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(ignore the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on the same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 JULY 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more run to check if blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failure was seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace

* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: "cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
was included in the branch, however, the PR got updated and needs retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing: reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 JAN 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

reruns
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458

known bugs
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on RHEL.

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing