h1. MAIN

h3. ADD NEW ENTRY BELOW

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
    qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(nvm the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63473
    fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:

* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:

* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
    kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
    logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
    error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
    qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
    build failure for mdtest project
* https://tracker.ceph.com/issues/62702
    fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62863
    deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
    test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
    test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
    workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
    Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
    qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 12 Sep 2023

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
    error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
    common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
    Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
    workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
    test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
    Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
    error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
    test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
    qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
    test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
    error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62567
    Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
    workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
    this run has failures but, according to Adam King, these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
    test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
    test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
    iozone.sh: line 5: iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
    error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/

There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

The blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
    test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
    test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
    test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
    reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
    'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
    blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
    scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
    xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
    xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
    reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023
1029
1030 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1031 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1032 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1033 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1034
1035
1036
* https://tracker.ceph.com/issues/59344
1037
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1038 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1039
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1040 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1041
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1042
* https://tracker.ceph.com/issues/57656
1043
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1044
* https://tracker.ceph.com/issues/54460
1045
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1046 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1047
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1048 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1049
  libmpich: undefined references to fi_strerror
1050
* https://tracker.ceph.com/issues/58945
1051
  xfstests-dev: ceph-fuse: generic 
1052 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1053 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1054
1055
h3. 24 May 2023
1056
1057
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1058
1059
* https://tracker.ceph.com/issues/57676
1060
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1061
* https://tracker.ceph.com/issues/59683
1062
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1063
* https://tracker.ceph.com/issues/61399
1064
    qa: "[Makefile:299: ior] Error 1"
1065
* https://tracker.ceph.com/issues/61265
1066
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1067
* https://tracker.ceph.com/issues/59348
1068
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1069
* https://tracker.ceph.com/issues/59346
1070
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1071
* https://tracker.ceph.com/issues/61400
1072
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1073
* https://tracker.ceph.com/issues/54460
1074
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1075
* https://tracker.ceph.com/issues/51964
1076
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1077
* https://tracker.ceph.com/issues/59344
1078
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1079
* https://tracker.ceph.com/issues/61407
1080
    mds: abort on CInode::verify_dirfrags
1081
* https://tracker.ceph.com/issues/48773
1082
    qa: scrub does not complete
1083
* https://tracker.ceph.com/issues/57655
1084
    qa: fs:mixed-clients kernel_untar_build failure
1085
* https://tracker.ceph.com/issues/61409
1086 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1087
1088
h3. 15 May 2023
1089 130 Venky Shankar
1090 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1091
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1092
1093
* https://tracker.ceph.com/issues/52624
1094
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1095
* https://tracker.ceph.com/issues/54460
1096
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1097
* https://tracker.ceph.com/issues/57676
1098
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1099
* https://tracker.ceph.com/issues/59684 [kclient bug]
1100
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1101
* https://tracker.ceph.com/issues/59348
1102
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1103 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1104
    dbench test results in call trace in dmesg [kclient bug]
1105 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1106 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1107 125 Venky Shankar
1108
 
1109 129 Rishabh Dave
h3. 11 May 2023
1110
1111
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1112
1113
* https://tracker.ceph.com/issues/59684 [kclient bug]
1114
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1115
* https://tracker.ceph.com/issues/59348
1116
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1117
* https://tracker.ceph.com/issues/57655
1118
  qa: fs:mixed-clients kernel_untar_build failure
1119
* https://tracker.ceph.com/issues/57676
1120
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1121
* https://tracker.ceph.com/issues/55805
1122
  error during scrub thrashing reached max tries in 900 secs
1123
* https://tracker.ceph.com/issues/54460
1124
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1125
* https://tracker.ceph.com/issues/57656
1126
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1127
* https://tracker.ceph.com/issues/58220
1128
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1129 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1130
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1131 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1132
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1133 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1134
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1135 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1136
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1137
1138 125 Venky Shankar
h3. 11 May 2023
1139 127 Venky Shankar
1140
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1141 126 Venky Shankar
1142 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1143
 was included in the branch; however, the PR got updated and needs a retest).
1144
1145
* https://tracker.ceph.com/issues/52624
1146
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1147
* https://tracker.ceph.com/issues/54460
1148
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1149
* https://tracker.ceph.com/issues/57676
1150
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1151
* https://tracker.ceph.com/issues/59683
1152
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1153
* https://tracker.ceph.com/issues/59684 [kclient bug]
1154
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1155
* https://tracker.ceph.com/issues/59348
1156 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1157
1158
h3. 09 May 2023
1159
1160
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1161
1162
* https://tracker.ceph.com/issues/52624
1163
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1164
* https://tracker.ceph.com/issues/58340
1165
    mds: fsstress.sh hangs with multimds
1166
* https://tracker.ceph.com/issues/54460
1167
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1168
* https://tracker.ceph.com/issues/57676
1169
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1170
* https://tracker.ceph.com/issues/51964
1171
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1172
* https://tracker.ceph.com/issues/59350
1173
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1174
* https://tracker.ceph.com/issues/59683
1175
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1176
* https://tracker.ceph.com/issues/59684 [kclient bug]
1177
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1178
* https://tracker.ceph.com/issues/59348
1179 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1180
1181
h3. 10 Apr 2023
1182
1183
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1184
1185
* https://tracker.ceph.com/issues/52624
1186
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1187
* https://tracker.ceph.com/issues/58340
1188
    mds: fsstress.sh hangs with multimds
1189
* https://tracker.ceph.com/issues/54460
1190
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1191
* https://tracker.ceph.com/issues/57676
1192
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1193 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1194 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1195 121 Rishabh Dave
1196 120 Rishabh Dave
h3. 31 Mar 2023
1197 122 Rishabh Dave
1198
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1199 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1200
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1201
1202
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).
1203
1204
* https://tracker.ceph.com/issues/57676
1205
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1206
* https://tracker.ceph.com/issues/54460
1207
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1208
* https://tracker.ceph.com/issues/58220
1209
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1210
* https://tracker.ceph.com/issues/58220#note-9
1211
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1212
* https://tracker.ceph.com/issues/56695
1213
  Command failed (workunit test suites/pjd.sh)
1214
* https://tracker.ceph.com/issues/58564 
1215
  workunit dbench failed with error code 1
1216
* https://tracker.ceph.com/issues/57206
1217
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1218
* https://tracker.ceph.com/issues/57580
1219
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1220
* https://tracker.ceph.com/issues/58940
1221
  ceph osd hit ceph_abort
1222
* https://tracker.ceph.com/issues/55805
1223 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1224
1225
h3. 30 March 2023
1226
1227
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1228
1229
* https://tracker.ceph.com/issues/58938
1230
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1231
* https://tracker.ceph.com/issues/51964
1232
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1233
* https://tracker.ceph.com/issues/58340
1234 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1235
1236 115 Venky Shankar
h3. 29 March 2023
1237 114 Venky Shankar
1238
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1239
1240
* https://tracker.ceph.com/issues/56695
1241
    [RHEL stock] pjd test failures
1242
* https://tracker.ceph.com/issues/57676
1243
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1244
* https://tracker.ceph.com/issues/57087
1245
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1246 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1247
    mds: fsstress.sh hangs with multimds
1248 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1249
    qa: fs:mixed-clients kernel_untar_build failure
1250 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1251
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1252 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1253 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1254
1255
h3. 13 Mar 2023
1256
1257
* https://tracker.ceph.com/issues/56695
1258
    [RHEL stock] pjd test failures
1259
* https://tracker.ceph.com/issues/57676
1260
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1261
* https://tracker.ceph.com/issues/51964
1262
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1263
* https://tracker.ceph.com/issues/54460
1264
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1265
* https://tracker.ceph.com/issues/57656
1266 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1267
1268
h3. 09 Mar 2023
1269
1270
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1271
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1272
1273
* https://tracker.ceph.com/issues/56695
1274
    [RHEL stock] pjd test failures
1275
* https://tracker.ceph.com/issues/57676
1276
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1277
* https://tracker.ceph.com/issues/51964
1278
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1279
* https://tracker.ceph.com/issues/54460
1280
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1281
* https://tracker.ceph.com/issues/58340
1282
    mds: fsstress.sh hangs with multimds
1283
* https://tracker.ceph.com/issues/57087
1284 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1285
1286
h3. 07 Mar 2023
1287
1288
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1289
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1290
1291
* https://tracker.ceph.com/issues/56695
1292
    [RHEL stock] pjd test failures
1293
* https://tracker.ceph.com/issues/57676
1294
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1295
* https://tracker.ceph.com/issues/51964
1296
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1297
* https://tracker.ceph.com/issues/57656
1298
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1299
* https://tracker.ceph.com/issues/57655
1300
    qa: fs:mixed-clients kernel_untar_build failure
1301
* https://tracker.ceph.com/issues/58220
1302
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1303
* https://tracker.ceph.com/issues/54460
1304
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1305
* https://tracker.ceph.com/issues/58934
1306 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1307
1308
h3. 28 Feb 2023
1309
1310
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1311
1312
* https://tracker.ceph.com/issues/56695
1313
    [RHEL stock] pjd test failures
1314
* https://tracker.ceph.com/issues/57676
1315
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1316 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1317 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1318
1319 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1320
1321
h3. 25 Jan 2023
1322
1323
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1324
1325
* https://tracker.ceph.com/issues/52624
1326
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1327
* https://tracker.ceph.com/issues/56695
1328
    [RHEL stock] pjd test failures
1329
* https://tracker.ceph.com/issues/57676
1330
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1331
* https://tracker.ceph.com/issues/56446
1332
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1333
* https://tracker.ceph.com/issues/57206
1334
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1335
* https://tracker.ceph.com/issues/58220
1336
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1337
* https://tracker.ceph.com/issues/58340
1338
  mds: fsstress.sh hangs with multimds
1339
* https://tracker.ceph.com/issues/56011
1340
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1341
* https://tracker.ceph.com/issues/54460
1342 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1343
1344
h3. 30 Jan 2023
1345
1346
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1347
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1348 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1349
1350 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1351
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1352
* https://tracker.ceph.com/issues/56695
1353
  [RHEL stock] pjd test failures
1354
* https://tracker.ceph.com/issues/57676
1355
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1356
* https://tracker.ceph.com/issues/55332
1357
  Failure in snaptest-git-ceph.sh
1358
* https://tracker.ceph.com/issues/51964
1359
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1360
* https://tracker.ceph.com/issues/56446
1361
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1362
* https://tracker.ceph.com/issues/57655 
1363
  qa: fs:mixed-clients kernel_untar_build failure
1364
* https://tracker.ceph.com/issues/54460
1365
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1366 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1367
  mds: fsstress.sh hangs with multimds
1368 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1369 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1370
1371
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1372 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1373
  Acc to Venky this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1374 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1375 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1376
1377
h3. 15 Dec 2022
1378
1379
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1380
1381
* https://tracker.ceph.com/issues/52624
1382
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1383
* https://tracker.ceph.com/issues/56695
1384
    [RHEL stock] pjd test failures
1385
* https://tracker.ceph.com/issues/58219
1386
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1387
* https://tracker.ceph.com/issues/57655
1388
    qa: fs:mixed-clients kernel_untar_build failure
1389
* https://tracker.ceph.com/issues/57676
1390
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1391
* https://tracker.ceph.com/issues/58340
1392 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1393
1394
h3. 08 Dec 2022
1395 99 Venky Shankar
1396 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1397
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1398
1399
(lots of transient git.ceph.com failures)
1400
1401
* https://tracker.ceph.com/issues/52624
1402
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1403
* https://tracker.ceph.com/issues/56695
1404
    [RHEL stock] pjd test failures
1405
* https://tracker.ceph.com/issues/57655
1406
    qa: fs:mixed-clients kernel_untar_build failure
1407
* https://tracker.ceph.com/issues/58219
1408
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1409
* https://tracker.ceph.com/issues/58220
1410
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1411 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1412
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1413 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1414
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1415
* https://tracker.ceph.com/issues/54460
1416
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1417 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1418 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1419
1420
h3. 14 Oct 2022
1421
1422
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1423
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1424
1425
* https://tracker.ceph.com/issues/52624
1426
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1427
* https://tracker.ceph.com/issues/55804
1428
    Command failed (workunit test suites/pjd.sh)
1429
* https://tracker.ceph.com/issues/51964
1430
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1431
* https://tracker.ceph.com/issues/57682
1432
    client: ERROR: test_reconnect_after_blocklisted
1433 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1434 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1435
1436
h3. 10 Oct 2022
1437 92 Rishabh Dave
1438 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1439
1440
reruns
1441
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1442 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1443 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1444 93 Rishabh Dave
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1445 91 Rishabh Dave
1446
known bugs
1447
* https://tracker.ceph.com/issues/52624
1448
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1449
* https://tracker.ceph.com/issues/50223
1450
  client.xxxx isn't responding to mclientcaps(revoke)
1451
* https://tracker.ceph.com/issues/57299
1452
  qa: test_dump_loads fails with JSONDecodeError
1453
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1454
  qa: fs:mixed-clients kernel_untar_build failure
1455
* https://tracker.ceph.com/issues/57206
1456 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1457
1458
h3. 2022 Sep 29
1459
1460
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1461
1462
* https://tracker.ceph.com/issues/55804
1463
  Command failed (workunit test suites/pjd.sh)
1464
* https://tracker.ceph.com/issues/36593
1465
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1466
* https://tracker.ceph.com/issues/52624
1467
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1468
* https://tracker.ceph.com/issues/51964
1469
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1470
* https://tracker.ceph.com/issues/56632
1471
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1472
* https://tracker.ceph.com/issues/50821
1473 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1474
1475
h3. 2022 Sep 26
1476
1477
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1478
1479
* https://tracker.ceph.com/issues/55804
1480
    qa failure: pjd link tests failed
1481
* https://tracker.ceph.com/issues/57676
1482
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1483
* https://tracker.ceph.com/issues/52624
1484
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1485
* https://tracker.ceph.com/issues/57580
1486
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1487
* https://tracker.ceph.com/issues/48773
1488
    qa: scrub does not complete
1489
* https://tracker.ceph.com/issues/57299
1490
    qa: test_dump_loads fails with JSONDecodeError
1491
* https://tracker.ceph.com/issues/57280
1492
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1493
* https://tracker.ceph.com/issues/57205
1494
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1495
* https://tracker.ceph.com/issues/57656
1496
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1497
* https://tracker.ceph.com/issues/57677
1498
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1499
* https://tracker.ceph.com/issues/57206
1500
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1501
* https://tracker.ceph.com/issues/57446
1502
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1503 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1504
    qa: fs:mixed-clients kernel_untar_build failure
1505 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1506
    client: ERROR: test_reconnect_after_blocklisted
1507 87 Patrick Donnelly
1508
1509
h3. 2022 Sep 22
1510
1511
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1512
1513
* https://tracker.ceph.com/issues/57299
1514
    qa: test_dump_loads fails with JSONDecodeError
1515
* https://tracker.ceph.com/issues/57205
1516
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1517
* https://tracker.ceph.com/issues/52624
1518
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1519
* https://tracker.ceph.com/issues/57580
1520
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1521
* https://tracker.ceph.com/issues/57280
1522
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1523
* https://tracker.ceph.com/issues/48773
1524
    qa: scrub does not complete
1525
* https://tracker.ceph.com/issues/56446
1526
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1527
* https://tracker.ceph.com/issues/57206
1528
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1529
* https://tracker.ceph.com/issues/51267
1530
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1531
1532
NEW:
1533
1534
* https://tracker.ceph.com/issues/57656
1535
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1536
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1537
    qa: fs:mixed-clients kernel_untar_build failure
1538
* https://tracker.ceph.com/issues/57657
1539
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1540
1541
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1542 80 Venky Shankar
1543 79 Venky Shankar
1544
h3. 2022 Sep 16
1545
1546
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1547
1548
* https://tracker.ceph.com/issues/57446
1549
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1550
* https://tracker.ceph.com/issues/57299
1551
    qa: test_dump_loads fails with JSONDecodeError
1552
* https://tracker.ceph.com/issues/50223
1553
    client.xxxx isn't responding to mclientcaps(revoke)
1554
* https://tracker.ceph.com/issues/52624
1555
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1556
* https://tracker.ceph.com/issues/57205
1557
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1558
* https://tracker.ceph.com/issues/57280
1559
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1560
* https://tracker.ceph.com/issues/51282
1561
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1562
* https://tracker.ceph.com/issues/48203
1563
  https://tracker.ceph.com/issues/36593
1564
    qa: quota failure
1565
    qa: quota failure caused by clients stepping on each other
1566
* https://tracker.ceph.com/issues/57580
1567 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1568
1569 76 Rishabh Dave
1570
h3. 2022 Aug 26
1571
1572
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1573
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1574
1575
* https://tracker.ceph.com/issues/57206
1576
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1577
* https://tracker.ceph.com/issues/56632
1578
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1579
* https://tracker.ceph.com/issues/56446
1580
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1581
* https://tracker.ceph.com/issues/51964
1582
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1583
* https://tracker.ceph.com/issues/53859
1584
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1585
1586
* https://tracker.ceph.com/issues/54460
1587
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1588
* https://tracker.ceph.com/issues/54462
1589
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1590
* https://tracker.ceph.com/issues/36593
1593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1594
1595
* https://tracker.ceph.com/issues/52624
1596
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1597
* https://tracker.ceph.com/issues/55804
1598
  Command failed (workunit test suites/pjd.sh)
1599
* https://tracker.ceph.com/issues/50223
1600
  client.xxxx isn't responding to mclientcaps(revoke)
1601 75 Venky Shankar
1602
1603
h3. 2022 Aug 22
1604
1605
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1606
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1607
1608
* https://tracker.ceph.com/issues/52624
1609
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1610
* https://tracker.ceph.com/issues/56446
1611
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1612
* https://tracker.ceph.com/issues/55804
1613
    Command failed (workunit test suites/pjd.sh)
1614
* https://tracker.ceph.com/issues/51278
1615
    mds: "FAILED ceph_assert(!segments.empty())"
1616
* https://tracker.ceph.com/issues/54460
1617
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1618
* https://tracker.ceph.com/issues/57205
1619
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1620
* https://tracker.ceph.com/issues/57206
1621
    ceph_test_libcephfs_reclaim crashes during test
1622
* https://tracker.ceph.com/issues/53859
1623
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1624
* https://tracker.ceph.com/issues/50223
1625 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1626
1627
h3. 2022 Aug 12
1628
1629
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1630
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1631
1632
* https://tracker.ceph.com/issues/52624
1633
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1634
* https://tracker.ceph.com/issues/56446
1635
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1636
* https://tracker.ceph.com/issues/51964
1637
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1638
* https://tracker.ceph.com/issues/55804
1639
    Command failed (workunit test suites/pjd.sh)
1640
* https://tracker.ceph.com/issues/50223
1641
    client.xxxx isn't responding to mclientcaps(revoke)
1642
* https://tracker.ceph.com/issues/50821
1643 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1644 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1645 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1646
1647
h3. 2022 Aug 04
1648
1649
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1650
1651 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1652 68 Rishabh Dave
1653
h3. 2022 Jul 25
1654
1655
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1656
1657 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1658
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1659 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1660
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1661
1662
* https://tracker.ceph.com/issues/55804
1663
  Command failed (workunit test suites/pjd.sh)
1664
* https://tracker.ceph.com/issues/50223
1665
  client.xxxx isn't responding to mclientcaps(revoke)
1666
1667
* https://tracker.ceph.com/issues/54460
1668
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1669 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1670 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1671 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1672 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1673
1674
h3. 2022 July 22
1675
1676
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1677
1678
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
1679
Transient selinux ping failure.
1680
1681
* https://tracker.ceph.com/issues/56694
1682
    qa: avoid blocking forever on hung umount
1683
* https://tracker.ceph.com/issues/56695
1684
    [RHEL stock] pjd test failures
1685
* https://tracker.ceph.com/issues/56696
1686
    admin keyring disappears during qa run
1687
* https://tracker.ceph.com/issues/56697
1688
    qa: fs/snaps fails for fuse
1689
* https://tracker.ceph.com/issues/50222
1690
    osd: 5.2s0 deep-scrub : stat mismatch
1691
* https://tracker.ceph.com/issues/56698
1692
    client: FAILED ceph_assert(_size == 0)
1693
* https://tracker.ceph.com/issues/50223
1694
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1695 66 Rishabh Dave
1696 65 Rishabh Dave
1697
h3. 2022 Jul 15
1698
1699
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1700
1701
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1702
1703
* https://tracker.ceph.com/issues/53859
1704
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1705
* https://tracker.ceph.com/issues/55804
1706
  Command failed (workunit test suites/pjd.sh)
1707
* https://tracker.ceph.com/issues/50223
1708
  client.xxxx isn't responding to mclientcaps(revoke)
1709
* https://tracker.ceph.com/issues/50222
1710
  osd: deep-scrub : stat mismatch
1711
1712
* https://tracker.ceph.com/issues/56632
1713
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1714
* https://tracker.ceph.com/issues/56634
1715
  workunit test fs/snaps/snaptest-intodir.sh
1716
* https://tracker.ceph.com/issues/56644
1717
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1718
1719 61 Rishabh Dave
1720
1721
h3. 2022 July 05
1722 62 Rishabh Dave
1723 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1724
1725
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1726
1727
On 2nd re-run only a few jobs failed -
1728 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1730
1731
* https://tracker.ceph.com/issues/56446
1732
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1733
* https://tracker.ceph.com/issues/55804
1734
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1735
1736
* https://tracker.ceph.com/issues/56445
1737 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1738
* https://tracker.ceph.com/issues/51267
1739
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1740 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1741
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1742 61 Rishabh Dave
1743 58 Venky Shankar
1744
1745
h3. 2022 July 04
1746
1747
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1748
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
1749
1750
* https://tracker.ceph.com/issues/56445
1751 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1752
* https://tracker.ceph.com/issues/56446
1753
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1754
* https://tracker.ceph.com/issues/51964
1755 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1756 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1757 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1758
1759
h3. 2022 June 20
1760
1761
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1762
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1763
1764
* https://tracker.ceph.com/issues/52624
1765
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1766
* https://tracker.ceph.com/issues/55804
1767
    qa failure: pjd link tests failed
1768
* https://tracker.ceph.com/issues/54108
1769
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1770
* https://tracker.ceph.com/issues/55332
1771 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1772
1773
h3. 2022 June 13
1774
1775
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1776
1777
* https://tracker.ceph.com/issues/56024
1778
    cephadm: removes ceph.conf during qa run causing command failure
1779
* https://tracker.ceph.com/issues/48773
1780
    qa: scrub does not complete
1781
* https://tracker.ceph.com/issues/56012
1782
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1783 55 Venky Shankar
1784 54 Venky Shankar
1785
h3. 2022 Jun 13
1786
1787
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1788
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1789
1790
* https://tracker.ceph.com/issues/52624
1791
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1792
* https://tracker.ceph.com/issues/51964
1793
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1794
* https://tracker.ceph.com/issues/53859
1795
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1796
* https://tracker.ceph.com/issues/55804
1797
    qa failure: pjd link tests failed
1798
* https://tracker.ceph.com/issues/56003
1799
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1800
* https://tracker.ceph.com/issues/56011
1801
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1802
* https://tracker.ceph.com/issues/56012
1803 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1804
1805
h3. 2022 Jun 07
1806
1807
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1808
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1809
1810
* https://tracker.ceph.com/issues/52624
1811
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1812
* https://tracker.ceph.com/issues/50223
1813
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1814
* https://tracker.ceph.com/issues/50224
1815 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1816
1817
h3. 2022 May 12
1818 52 Venky Shankar
1819 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1820
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
1821
1822
* https://tracker.ceph.com/issues/52624
1823
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1824
* https://tracker.ceph.com/issues/50223
1825
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1826
* https://tracker.ceph.com/issues/55332
1827
    Failure in snaptest-git-ceph.sh
1828
* https://tracker.ceph.com/issues/53859
1829 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1830 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1831
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1832 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1833 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1834
1835 50 Venky Shankar
h3. 2022 May 04
1836
1837
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1838 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1839
1840
* https://tracker.ceph.com/issues/52624
1841
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1842
* https://tracker.ceph.com/issues/50223
1843
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1844
* https://tracker.ceph.com/issues/55332
1845
    Failure in snaptest-git-ceph.sh
1846
* https://tracker.ceph.com/issues/53859
1847
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1848
* https://tracker.ceph.com/issues/55516
1849
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1850
* https://tracker.ceph.com/issues/55537
1851
    mds: crash during fs:upgrade test
1852
* https://tracker.ceph.com/issues/55538
1853 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1854
1855
h3. 2022 Apr 25
1856
1857
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1858
1859
* https://tracker.ceph.com/issues/52624
1860
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1861
* https://tracker.ceph.com/issues/50223
1862
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1863
* https://tracker.ceph.com/issues/55258
1864
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1865
* https://tracker.ceph.com/issues/55377
1866 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1867
1868
h3. 2022 Apr 14
1869
1870
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1871
1872
* https://tracker.ceph.com/issues/52624
1873
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1874
* https://tracker.ceph.com/issues/50223
1875
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1876
* https://tracker.ceph.com/issues/52438
1877
    qa: ffsb timeout
1878
* https://tracker.ceph.com/issues/55170
1879
    mds: crash during rejoin (CDir::fetch_keys)
1880
* https://tracker.ceph.com/issues/55331
1881
    pjd failure
1882
* https://tracker.ceph.com/issues/48773
1883
    qa: scrub does not complete
1884
* https://tracker.ceph.com/issues/55332
1885
    Failure in snaptest-git-ceph.sh
1886
* https://tracker.ceph.com/issues/55258
1887 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1888
1889 46 Venky Shankar
h3. 2022 Apr 11
1890 45 Venky Shankar
1891
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1892
1893
* https://tracker.ceph.com/issues/48773
1894
    qa: scrub does not complete
1895
* https://tracker.ceph.com/issues/52624
1896
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1897
* https://tracker.ceph.com/issues/52438
1898
    qa: ffsb timeout
1899
* https://tracker.ceph.com/issues/48680
1900
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1901
* https://tracker.ceph.com/issues/55236
1902
    qa: fs/snaps tests fails with "hit max job timeout"
1903
* https://tracker.ceph.com/issues/54108
1904
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1905
* https://tracker.ceph.com/issues/54971
1906
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1907
* https://tracker.ceph.com/issues/50223
1908
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1909
* https://tracker.ceph.com/issues/55258
1910 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1911 42 Venky Shankar
1912 43 Venky Shankar
h3. 2022 Mar 21
1913
1914
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1915
1916
The run didn't go well, with lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
1917
1918
1919 42 Venky Shankar
h3. 2022 Mar 08
1920
1921
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1922
1923
rerun with
1924
- (drop) https://github.com/ceph/ceph/pull/44679
1925
- (drop) https://github.com/ceph/ceph/pull/44958
1926
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1927
1928
* https://tracker.ceph.com/issues/54419 (new)
1929
    `ceph orch upgrade start` seems to never reach completion
1930
* https://tracker.ceph.com/issues/51964
1931
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1932
* https://tracker.ceph.com/issues/52624
1933
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1934
* https://tracker.ceph.com/issues/50223
1935
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1936
* https://tracker.ceph.com/issues/52438
1937
    qa: ffsb timeout
1938
* https://tracker.ceph.com/issues/50821
1939
    qa: untar_snap_rm failure during mds thrashing
1940 41 Venky Shankar
1941
1942
h3. 2022 Feb 09
1943
1944
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1945
1946
rerun with
1947
- (drop) https://github.com/ceph/ceph/pull/37938
1948
- (drop) https://github.com/ceph/ceph/pull/44335
1949
- (drop) https://github.com/ceph/ceph/pull/44491
1950
- (drop) https://github.com/ceph/ceph/pull/44501
1951
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1952
1953
* https://tracker.ceph.com/issues/51964
1954
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1955
* https://tracker.ceph.com/issues/54066
1956
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1957
* https://tracker.ceph.com/issues/48773
1958
    qa: scrub does not complete
1959
* https://tracker.ceph.com/issues/52624
1960
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1961
* https://tracker.ceph.com/issues/50223
1962
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1963
* https://tracker.ceph.com/issues/52438
1964 40 Patrick Donnelly
    qa: ffsb timeout
1965
1966
h3. 2022 Feb 01
1967
1968
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1969
1970
* https://tracker.ceph.com/issues/54107
1971
    kclient: hang during umount
1972
* https://tracker.ceph.com/issues/54106
1973
    kclient: hang during workunit cleanup
1974
* https://tracker.ceph.com/issues/54108
1975
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1976
* https://tracker.ceph.com/issues/48773
1977
    qa: scrub does not complete
1978
* https://tracker.ceph.com/issues/52438
1979
    qa: ffsb timeout
1980 36 Venky Shankar
1981
1982
h3. 2022 Jan 13
1983 39 Venky Shankar
1984 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1985 38 Venky Shankar
1986
rerun with:
1987 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1988
- (drop) https://github.com/ceph/ceph/pull/43184
1989
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1990
1991
* https://tracker.ceph.com/issues/50223
1992
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1993
* https://tracker.ceph.com/issues/51282
1994
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1995
* https://tracker.ceph.com/issues/48773
1996
    qa: scrub does not complete
1997
* https://tracker.ceph.com/issues/52624
1998
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1999
* https://tracker.ceph.com/issues/53859
2000 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2001
2002
h3. 2022 Jan 03
2003
2004
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2005
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2006
2007
* https://tracker.ceph.com/issues/50223
2008
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2009
* https://tracker.ceph.com/issues/51964
2010
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2011
* https://tracker.ceph.com/issues/51267
2012
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2013
* https://tracker.ceph.com/issues/51282
2014
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2015
* https://tracker.ceph.com/issues/50821
2016
    qa: untar_snap_rm failure during mds thrashing
2017 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2018
    mds: "FAILED ceph_assert(!segments.empty())"
2019
* https://tracker.ceph.com/issues/52279
2020 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2021 33 Patrick Donnelly
2022
2023
h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise appeared in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

Additionally, one failure was caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing