Main » History » Version 211

Patrick Donnelly, 11/30/2023 02:07 PM

h1. MAIN

h3. ADD NEW ENTRY BELOW

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
* https://tracker.ceph.com/issues/46100

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(ignore the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

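Note on the OSD bench warning above (https://tracker.ceph.com/issues/59531): the message itself recommends re-measuring the OSD's IOPS capacity with an external tool such as fio and recording that value for the cluster. Below is only a rough sketch of what that could look like, assuming the mClock scheduler's osd_mclock_max_capacity_iops_* options are in play; the device path and the value 315 (taken from the warning text) are illustrative, not a prescribed fix.

<pre>
# Measure small random-write IOPS on the OSD's backing device with fio
# (the device path is a placeholder; run only against a scratch device).
fio --name=osd-bench --filename=/dev/sdX --direct=1 --rw=randwrite --bs=4k \
    --iodepth=16 --runtime=60 --time_based --output-format=json

# Record the measured capacity for the affected OSD so mClock no longer
# depends on the suspect 'osd bench' estimate (use the _hdd variant for
# rotational devices).
ceph config set osd.7 osd_mclock_max_capacity_iops_ssd 315
</pre>
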
h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One follow-up fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

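On the logrotate issue above (https://tracker.ceph.com/issues/62937): logrotate does not coordinate concurrent invocations over the same set of logfiles, so one generic way to serialize them is an flock wrapper around the call. This is only an illustrative sketch of that idea, not the fix tracked in the issue; the lock file path is hypothetical.

<pre>
# Serialize concurrent logrotate runs on the shared ceph-test config
# by taking an exclusive lock before invoking logrotate.
flock /var/lock/ceph-test-logrotate.lock \
    sudo logrotate /etc/logrotate.d/ceph-test.conf
</pre>
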
h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

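On the "No orchestrator configured" failure above (https://tracker.ceph.com/issues/62915): the error text itself points at `ceph orch set backend`. As a rough sketch of what a cluster normally needs before `ceph nfs`/`ceph orch` commands work (assuming the cephadm orchestrator is the intended backend; this is not the qa task's actual setup code):

<pre>
# Enable the cephadm mgr module and select it as the orchestrator backend,
# then verify that an orchestrator is now configured.
ceph mgr module enable cephadm
ceph orch set backend cephadm
ceph orch status
</pre>
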
h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 JULY 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures:
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing:
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace

* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] "evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023
1009
1010
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1011
1012
* https://tracker.ceph.com/issues/57676
1013
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1014
* https://tracker.ceph.com/issues/54460
1015
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1016 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1017 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1018 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1019 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1020
* https://tracker.ceph.com/issues/59344
1021
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1022
* https://tracker.ceph.com/issues/59348
1023
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1024
* https://tracker.ceph.com/issues/57656
1025
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1026
* https://tracker.ceph.com/issues/61400
1027
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1028
* https://tracker.ceph.com/issues/57655
1029
    qa: fs:mixed-clients kernel_untar_build failure
1030
* https://tracker.ceph.com/issues/44565
1031
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1032
* https://tracker.ceph.com/issues/61737
1033 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 JAN 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

reruns
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458

known bugs
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on rhel

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1895
1896 46 Venky Shankar
h3. 2022 Apr 11
1897 45 Venky Shankar
1898
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1899
1900
* https://tracker.ceph.com/issues/48773
1901
    qa: scrub does not complete
1902
* https://tracker.ceph.com/issues/52624
1903
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1904
* https://tracker.ceph.com/issues/52438
1905
    qa: ffsb timeout
1906
* https://tracker.ceph.com/issues/48680
1907
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1908
* https://tracker.ceph.com/issues/55236
1909
    qa: fs/snaps tests fails with "hit max job timeout"
1910
* https://tracker.ceph.com/issues/54108
1911
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1912
* https://tracker.ceph.com/issues/54971
1913
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1914
* https://tracker.ceph.com/issues/50223
1915
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1916
* https://tracker.ceph.com/issues/55258
1917 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

Run didn't go well, lots of failures - debugging by dropping PRs and running against the master branch. Only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2028 33 Patrick Donnelly
2029
2030
h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS abort class of failures was caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and one failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing