Project

General

Profile

Main » History » Version 206

Venky Shankar, 11/14/2023 05:56 AM

1 79 Venky Shankar
h1. MAIN
2
3 201 Rishabh Dave
h3. ADD NEW ENTRY BELOW
4
5 206 Venky Shankar
h3. 14 Nov 2023
6
7
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
8
9
(nvm the fs:upgrade test failure - the PR is excluded from merge)
10
11
* https://tracker.ceph.com/issues/57676
12
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
13
* https://tracker.ceph.com/issues/63233
14
    mon|client|mds: valgrind reports possible leaks in the MDS
15
* https://tracker.ceph.com/issues/63141
16
    qa/cephfs: test_idem_unaffected_root_squash fails
17
* https://tracker.ceph.com/issues/62580
18
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
19
* https://tracker.ceph.com/issues/57655
20
    qa: fs:mixed-clients kernel_untar_build failure
21
* https://tracker.ceph.com/issues/51964
22
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
23
* https://tracker.ceph.com/issues/63519
24
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
25
* https://tracker.ceph.com/issues/57087
26
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
27
* https://tracker.ceph.com/issues/58945
28
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
29
30 204 Rishabh Dave
h3. 7 Nov 2023
31
32 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
33
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
34
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
35 204 Rishabh Dave
36
* https://tracker.ceph.com/issues/53859
37
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
38
* https://tracker.ceph.com/issues/63233
39
  mon|client|mds: valgrind reports possible leaks in the MDS
40
* https://tracker.ceph.com/issues/57655
41
  qa: fs:mixed-clients kernel_untar_build failure
42
* https://tracker.ceph.com/issues/57676
43
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
44
45
* https://tracker.ceph.com/issues/63473
46
  fsstress.sh failed with errno 124
47
48 202 Rishabh Dave
h3. 3 Nov 2023
49 203 Rishabh Dave
50 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
51
52
* https://tracker.ceph.com/issues/63141
53
  qa/cephfs: test_idem_unaffected_root_squash fails
54
* https://tracker.ceph.com/issues/63233
55
  mon|client|mds: valgrind reports possible leaks in the MDS
56
* https://tracker.ceph.com/issues/57656
57
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
58
* https://tracker.ceph.com/issues/57655
59
  qa: fs:mixed-clients kernel_untar_build failure
60
* https://tracker.ceph.com/issues/57676
61
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
62
63
* https://tracker.ceph.com/issues/59531
64
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
65
* https://tracker.ceph.com/issues/52624
66
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
67
68 198 Patrick Donnelly
h3. 24 October 2023
69
70
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
71
72 200 Patrick Donnelly
Two failures:
73
74
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
75
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
76
77
probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.
78
79 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
80
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
81
* https://tracker.ceph.com/issues/57676
82 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
83
* https://tracker.ceph.com/issues/63233
84
    mon|client|mds: valgrind reports possible leaks in the MDS
85
* https://tracker.ceph.com/issues/59531
86
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
87
* https://tracker.ceph.com/issues/57655
88
    qa: fs:mixed-clients kernel_untar_build failure
89 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
90
    ffsb.sh failure "Resource temporarily unavailable"
91
* https://tracker.ceph.com/issues/63411
92
    qa: flush journal may cause timeouts of `scrub status`
93
* https://tracker.ceph.com/issues/61243
94
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
95
* https://tracker.ceph.com/issues/63141
96 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
97 148 Rishabh Dave
98 195 Venky Shankar
h3. 18 Oct 2023
99
100
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
101
102
* https://tracker.ceph.com/issues/52624
103
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
104
* https://tracker.ceph.com/issues/57676
105
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
106
* https://tracker.ceph.com/issues/63233
107
    mon|client|mds: valgrind reports possible leaks in the MDS
108
* https://tracker.ceph.com/issues/63141
109
    qa/cephfs: test_idem_unaffected_root_squash fails
110
* https://tracker.ceph.com/issues/59531
111
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
112
* https://tracker.ceph.com/issues/62658
113
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
114
* https://tracker.ceph.com/issues/62580
115
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
116
* https://tracker.ceph.com/issues/62067
117
    ffsb.sh failure "Resource temporarily unavailable"
118
* https://tracker.ceph.com/issues/57655
119
    qa: fs:mixed-clients kernel_untar_build failure
120
* https://tracker.ceph.com/issues/62036
121
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
122
* https://tracker.ceph.com/issues/58945
123
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
124
* https://tracker.ceph.com/issues/62847
125
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
126
127 193 Venky Shankar
h3. 13 Oct 2023
128
129
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
130
131
* https://tracker.ceph.com/issues/52624
132
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
133
* https://tracker.ceph.com/issues/62936
134
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
135
* https://tracker.ceph.com/issues/47292
136
    cephfs-shell: test_df_for_valid_file failure
137
* https://tracker.ceph.com/issues/63141
138
    qa/cephfs: test_idem_unaffected_root_squash fails
139
* https://tracker.ceph.com/issues/62081
140
    tasks/fscrypt-common does not finish, timesout
141 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
142
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
143 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
144
    mon|client|mds: valgrind reports possible leaks in the MDS
145 193 Venky Shankar
146 190 Patrick Donnelly
h3. 16 Oct 2023
147
148
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
149
150 192 Patrick Donnelly
Infrastructure issues:
151
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
152
    Host lost.
153
154 196 Patrick Donnelly
One followup fix:
155
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
156
157 192 Patrick Donnelly
Failures:
158
159
* https://tracker.ceph.com/issues/56694
160
    qa: avoid blocking forever on hung umount
161
* https://tracker.ceph.com/issues/63089
162
    qa: tasks/mirror times out
163
* https://tracker.ceph.com/issues/52624
164
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
165
* https://tracker.ceph.com/issues/59531
166
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
167
* https://tracker.ceph.com/issues/57676
168
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
169
* https://tracker.ceph.com/issues/62658 
170
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
171
* https://tracker.ceph.com/issues/61243
172
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
173
* https://tracker.ceph.com/issues/57656
174
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
175
* https://tracker.ceph.com/issues/63233
176
  mon|client|mds: valgrind reports possible leaks in the MDS
177 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
178
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
179 192 Patrick Donnelly
180 189 Rishabh Dave
h3. 9 Oct 2023
181
182
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
183
184
* https://tracker.ceph.com/issues/54460
185
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
186
* https://tracker.ceph.com/issues/63141
187
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
188
* https://tracker.ceph.com/issues/62937
189
  logrotate doesn't support parallel execution on same set of logfiles
190
* https://tracker.ceph.com/issues/61400
191
  valgrind+ceph-mon issues
192
* https://tracker.ceph.com/issues/57676
193
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
194
* https://tracker.ceph.com/issues/55805
195
  error during scrub thrashing reached max tries in 900 secs
196
197 188 Venky Shankar
h3. 26 Sep 2023
198
199
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
200
201
* https://tracker.ceph.com/issues/52624
202
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
203
* https://tracker.ceph.com/issues/62873
204
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
205
* https://tracker.ceph.com/issues/61400
206
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
207
* https://tracker.ceph.com/issues/57676
208
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
209
* https://tracker.ceph.com/issues/62682
210
    mon: no mdsmap broadcast after "fs set joinable" is set to true
211
* https://tracker.ceph.com/issues/63089
212
    qa: tasks/mirror times out
213
214 185 Rishabh Dave
h3. 22 Sep 2023
215
216
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
217
218
* https://tracker.ceph.com/issues/59348
219
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
220
* https://tracker.ceph.com/issues/59344
221
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
222
* https://tracker.ceph.com/issues/59531
223
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
224
* https://tracker.ceph.com/issues/61574
225
  build failure for mdtest project
226
* https://tracker.ceph.com/issues/62702
227
  fsstress.sh: MDS slow requests for the internal 'rename' requests
228
* https://tracker.ceph.com/issues/57676
229
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
230
231
* https://tracker.ceph.com/issues/62863 
232
  deadlock in ceph-fuse causes teuthology job to hang and fail
233
* https://tracker.ceph.com/issues/62870
234
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
235
* https://tracker.ceph.com/issues/62873
236
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
237
238 186 Venky Shankar
h3. 20 Sep 2023
239
240
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
241
242
* https://tracker.ceph.com/issues/52624
243
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
244
* https://tracker.ceph.com/issues/61400
245
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
246
* https://tracker.ceph.com/issues/61399
247
    libmpich: undefined references to fi_strerror
248
* https://tracker.ceph.com/issues/62081
249
    tasks/fscrypt-common does not finish, timesout
250
* https://tracker.ceph.com/issues/62658 
251
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
252
* https://tracker.ceph.com/issues/62915
253
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
254
* https://tracker.ceph.com/issues/59531
255
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
256
* https://tracker.ceph.com/issues/62873
257
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
258
* https://tracker.ceph.com/issues/62936
259
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
260
* https://tracker.ceph.com/issues/62937
261
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
262
* https://tracker.ceph.com/issues/62510
263
    snaptest-git-ceph.sh failure with fs/thrash
264
* https://tracker.ceph.com/issues/62081
265
    tasks/fscrypt-common does not finish, timesout
266
* https://tracker.ceph.com/issues/62126
267
    test failure: suites/blogbench.sh stops running
268 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
269
    mon: no mdsmap broadcast after "fs set joinable" is set to true
270 186 Venky Shankar
271 184 Milind Changire
h3. 19 Sep 2023
272
273
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
274
275
* https://tracker.ceph.com/issues/58220#note-9
276
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
277
* https://tracker.ceph.com/issues/62702
278
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
279
* https://tracker.ceph.com/issues/57676
280
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
281
* https://tracker.ceph.com/issues/59348
282
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
283
* https://tracker.ceph.com/issues/52624
284
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
285
* https://tracker.ceph.com/issues/51964
286
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
287
* https://tracker.ceph.com/issues/61243
288
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
289
* https://tracker.ceph.com/issues/59344
290
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
291
* https://tracker.ceph.com/issues/62873
292
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
293
* https://tracker.ceph.com/issues/59413
294
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
295
* https://tracker.ceph.com/issues/53859
296
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
297
* https://tracker.ceph.com/issues/62482
298
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
299
300 178 Patrick Donnelly
301 177 Venky Shankar
h3. 13 Sep 2023
302
303
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
304
305
* https://tracker.ceph.com/issues/52624
306
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
307
* https://tracker.ceph.com/issues/57655
308
    qa: fs:mixed-clients kernel_untar_build failure
309
* https://tracker.ceph.com/issues/57676
310
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
311
* https://tracker.ceph.com/issues/61243
312
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
313
* https://tracker.ceph.com/issues/62567
314
    postgres workunit times out - MDS_SLOW_REQUEST in logs
315
* https://tracker.ceph.com/issues/61400
316
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
317
* https://tracker.ceph.com/issues/61399
318
    libmpich: undefined references to fi_strerror
319
* https://tracker.ceph.com/issues/57655
320
    qa: fs:mixed-clients kernel_untar_build failure
321
* https://tracker.ceph.com/issues/57676
322
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
323
* https://tracker.ceph.com/issues/51964
324
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
325
* https://tracker.ceph.com/issues/62081
326
    tasks/fscrypt-common does not finish, timesout
327 178 Patrick Donnelly
328 179 Patrick Donnelly
h3. 2023 Sep 12
329 178 Patrick Donnelly
330
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
331 1 Patrick Donnelly
332 181 Patrick Donnelly
A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:
333
334 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
335 181 Patrick Donnelly
336
Failures:
337
338 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
339
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
340
* https://tracker.ceph.com/issues/57656
341
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
342
* https://tracker.ceph.com/issues/55805
343
  error scrub thrashing reached max tries in 900 secs
344
* https://tracker.ceph.com/issues/62067
345
    ffsb.sh failure "Resource temporarily unavailable"
346
* https://tracker.ceph.com/issues/59344
347
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
348
* https://tracker.ceph.com/issues/61399
349 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
350
* https://tracker.ceph.com/issues/62832
351
  common: config_proxy deadlock during shutdown (and possibly other times)
352
* https://tracker.ceph.com/issues/59413
353 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
354 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
355
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
356
* https://tracker.ceph.com/issues/62567
357
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
358
* https://tracker.ceph.com/issues/54460
359
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
360
* https://tracker.ceph.com/issues/58220#note-9
361
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
362
* https://tracker.ceph.com/issues/59348
363
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
364 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
365
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
366
* https://tracker.ceph.com/issues/62848
367
    qa: fail_fs upgrade scenario hanging
368
* https://tracker.ceph.com/issues/62081
369
    tasks/fscrypt-common does not finish, timesout
370 177 Venky Shankar
371 176 Venky Shankar
h3. 11 Sep 2023
372 175 Venky Shankar
373
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
374
375
* https://tracker.ceph.com/issues/52624
376
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
377
* https://tracker.ceph.com/issues/61399
378
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
379
* https://tracker.ceph.com/issues/57655
380
    qa: fs:mixed-clients kernel_untar_build failure
381
* https://tracker.ceph.com/issues/61399
382
    ior build failure
383
* https://tracker.ceph.com/issues/59531
384
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
385
* https://tracker.ceph.com/issues/59344
386
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
387
* https://tracker.ceph.com/issues/59346
388
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
389
* https://tracker.ceph.com/issues/59348
390
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
391
* https://tracker.ceph.com/issues/57676
392
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
393
* https://tracker.ceph.com/issues/61243
394
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
395
* https://tracker.ceph.com/issues/62567
396
  postgres workunit times out - MDS_SLOW_REQUEST in logs
397
398
399 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
400
401
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
402
403
* https://tracker.ceph.com/issues/51964
404
  test_cephfs_mirror_restart_sync_on_blocklist failure
405
* https://tracker.ceph.com/issues/59348
406
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
407
* https://tracker.ceph.com/issues/53859
408
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
409
* https://tracker.ceph.com/issues/61892
410
  test_strays.TestStrays.test_snapshot_remove failed
411
* https://tracker.ceph.com/issues/54460
412
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
413
* https://tracker.ceph.com/issues/59346
414
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
415
* https://tracker.ceph.com/issues/59344
416
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
417
* https://tracker.ceph.com/issues/62484
418
  qa: ffsb.sh test failure
419
* https://tracker.ceph.com/issues/62567
420
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
421
  
422
* https://tracker.ceph.com/issues/61399
423
  ior build failure
424
* https://tracker.ceph.com/issues/57676
425
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
426
* https://tracker.ceph.com/issues/55805
427
  error scrub thrashing reached max tries in 900 secs
428
429 172 Rishabh Dave
h3. 6 Sep 2023
430 171 Rishabh Dave
431 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
432 171 Rishabh Dave
433 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
434
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
435 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
436
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
437 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
438 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
439
* https://tracker.ceph.com/issues/59348
440
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
441
* https://tracker.ceph.com/issues/54462
442
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
443
* https://tracker.ceph.com/issues/62556
444
  test_acls: xfstests_dev: python2 is missing
445
* https://tracker.ceph.com/issues/62067
446
  ffsb.sh failure "Resource temporarily unavailable"
447
* https://tracker.ceph.com/issues/57656
448
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
449 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
450
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
451 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
452 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
453
454 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
455
  ior build failure
456
* https://tracker.ceph.com/issues/57676
457
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
458
* https://tracker.ceph.com/issues/55805
459
  error scrub thrashing reached max tries in 900 secs
460 173 Rishabh Dave
461
* https://tracker.ceph.com/issues/62567
462
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
463
* https://tracker.ceph.com/issues/62702
464
  workunit test suites/fsstress.sh on smithi066 with status 124
465 170 Rishabh Dave
466
h3. 5 Sep 2023
467
468
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
469
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
470
  this run has failures but acc to Adam King these are not relevant and should be ignored
471
472
* https://tracker.ceph.com/issues/61892
473
  test_snapshot_remove (test_strays.TestStrays) failed
474
* https://tracker.ceph.com/issues/59348
475
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
476
* https://tracker.ceph.com/issues/54462
477
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
478
* https://tracker.ceph.com/issues/62067
479
  ffsb.sh failure "Resource temporarily unavailable"
480
* https://tracker.ceph.com/issues/57656 
481
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
482
* https://tracker.ceph.com/issues/59346
483
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
484
* https://tracker.ceph.com/issues/59344
485
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
486
* https://tracker.ceph.com/issues/50223
487
  client.xxxx isn't responding to mclientcaps(revoke)
488
* https://tracker.ceph.com/issues/57655
489
  qa: fs:mixed-clients kernel_untar_build failure
490
* https://tracker.ceph.com/issues/62187
491
  iozone.sh: line 5: iozone: command not found
492
 
493
* https://tracker.ceph.com/issues/61399
494
  ior build failure
495
* https://tracker.ceph.com/issues/57676
496
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
497
* https://tracker.ceph.com/issues/55805
498
  error scrub thrashing reached max tries in 900 secs
499 169 Venky Shankar
500
501
h3. 31 Aug 2023
502
503
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
504
505
* https://tracker.ceph.com/issues/52624
506
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
507
* https://tracker.ceph.com/issues/62187
508
    iozone: command not found
509
* https://tracker.ceph.com/issues/61399
510
    ior build failure
511
* https://tracker.ceph.com/issues/59531
512
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
513
* https://tracker.ceph.com/issues/61399
514
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
515
* https://tracker.ceph.com/issues/57655
516
    qa: fs:mixed-clients kernel_untar_build failure
517
* https://tracker.ceph.com/issues/59344
518
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
519
* https://tracker.ceph.com/issues/59346
520
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
521
* https://tracker.ceph.com/issues/59348
522
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
523
* https://tracker.ceph.com/issues/59413
524
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
525
* https://tracker.ceph.com/issues/62653
526
    qa: unimplemented fcntl command: 1036 with fsstress
527
* https://tracker.ceph.com/issues/61400
528
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
529
* https://tracker.ceph.com/issues/62658
530
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
531
* https://tracker.ceph.com/issues/62188
532
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
533 168 Venky Shankar
534
535
h3. 25 Aug 2023
536
537
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
538
539
* https://tracker.ceph.com/issues/59344
540
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
541
* https://tracker.ceph.com/issues/59346
542
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
543
* https://tracker.ceph.com/issues/59348
544
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
545
* https://tracker.ceph.com/issues/57655
546
    qa: fs:mixed-clients kernel_untar_build failure
547
* https://tracker.ceph.com/issues/61243
548
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
549
* https://tracker.ceph.com/issues/61399
550
    ior build failure
551
* https://tracker.ceph.com/issues/61399
552
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
553
* https://tracker.ceph.com/issues/62484
554
    qa: ffsb.sh test failure
555
* https://tracker.ceph.com/issues/59531
556
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
557
* https://tracker.ceph.com/issues/62510
558
    snaptest-git-ceph.sh failure with fs/thrash
559 167 Venky Shankar
560
561
h3. 24 Aug 2023
562
563
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
564
565
* https://tracker.ceph.com/issues/57676
566
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
567
* https://tracker.ceph.com/issues/51964
568
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
569
* https://tracker.ceph.com/issues/59344
570
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
571
* https://tracker.ceph.com/issues/59346
572
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
573
* https://tracker.ceph.com/issues/59348
574
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
575
* https://tracker.ceph.com/issues/61399
576
    ior build failure
577
* https://tracker.ceph.com/issues/61399
578
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
579
* https://tracker.ceph.com/issues/62510
580
    snaptest-git-ceph.sh failure with fs/thrash
581
* https://tracker.ceph.com/issues/62484
582
    qa: ffsb.sh test failure
583
* https://tracker.ceph.com/issues/57087
584
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
585
* https://tracker.ceph.com/issues/57656
586
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
587
* https://tracker.ceph.com/issues/62187
588
    iozone: command not found
589
* https://tracker.ceph.com/issues/62188
590
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
591
* https://tracker.ceph.com/issues/62567
592
    postgres workunit times out - MDS_SLOW_REQUEST in logs
593 166 Venky Shankar
594
595
h3. 22 Aug 2023
596
597
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
598
599
* https://tracker.ceph.com/issues/57676
600
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
601
* https://tracker.ceph.com/issues/51964
602
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
603
* https://tracker.ceph.com/issues/59344
604
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
605
* https://tracker.ceph.com/issues/59346
606
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
607
* https://tracker.ceph.com/issues/59348
608
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
609
* https://tracker.ceph.com/issues/61399
610
    ior build failure
611
* https://tracker.ceph.com/issues/61399
612
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
613
* https://tracker.ceph.com/issues/57655
614
    qa: fs:mixed-clients kernel_untar_build failure
615
* https://tracker.ceph.com/issues/61243
616
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
617
* https://tracker.ceph.com/issues/62188
618
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
619
* https://tracker.ceph.com/issues/62510
620
    snaptest-git-ceph.sh failure with fs/thrash
621
* https://tracker.ceph.com/issues/62511
622
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
623 165 Venky Shankar
624
625
h3. 14 Aug 2023
626
627
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
628
629
* https://tracker.ceph.com/issues/51964
630
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
631
* https://tracker.ceph.com/issues/61400
632
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
633
* https://tracker.ceph.com/issues/61399
634
    ior build failure
635
* https://tracker.ceph.com/issues/59348
636
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
637
* https://tracker.ceph.com/issues/59531
638
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
639
* https://tracker.ceph.com/issues/59344
640
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
641
* https://tracker.ceph.com/issues/59346
642
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
643
* https://tracker.ceph.com/issues/61399
644
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
645
* https://tracker.ceph.com/issues/59684 [kclient bug]
646
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
647
* https://tracker.ceph.com/issues/61243 (NEW)
648
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
649
* https://tracker.ceph.com/issues/57655
650
    qa: fs:mixed-clients kernel_untar_build failure
651
* https://tracker.ceph.com/issues/57656
652
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
653 163 Venky Shankar
654
655
h3. 28 JULY 2023
656
657
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
658
659
* https://tracker.ceph.com/issues/51964
660
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
661
* https://tracker.ceph.com/issues/61400
662
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
663
* https://tracker.ceph.com/issues/61399
664
    ior build failure
665
* https://tracker.ceph.com/issues/57676
666
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
667
* https://tracker.ceph.com/issues/59348
668
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
669
* https://tracker.ceph.com/issues/59531
670
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
671
* https://tracker.ceph.com/issues/59344
672
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
673
* https://tracker.ceph.com/issues/59346
674
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
675
* https://github.com/ceph/ceph/pull/52556
676
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
677
* https://tracker.ceph.com/issues/62187
678
    iozone: command not found
679
* https://tracker.ceph.com/issues/61399
680
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
681
* https://tracker.ceph.com/issues/62188
682 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
683 158 Rishabh Dave
684
h3. 24 Jul 2023
685
686
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
687
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
688
There were few failure from one of the PRs under testing. Following run confirms that removing this PR fixes these failures -
689
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
690
One more extra run to check if blogbench.sh fail every time:
691
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
692
blogbench.sh failure were seen on above runs for first time, following run with main branch that confirms that "blogbench.sh" was not related to any of the PRs that are under testing -
693 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
694
695
* https://tracker.ceph.com/issues/61892
696
  test_snapshot_remove (test_strays.TestStrays) failed
697
* https://tracker.ceph.com/issues/53859
698
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
699
* https://tracker.ceph.com/issues/61982
700
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
701
* https://tracker.ceph.com/issues/52438
702
  qa: ffsb timeout
703
* https://tracker.ceph.com/issues/54460
704
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
705
* https://tracker.ceph.com/issues/57655
706
  qa: fs:mixed-clients kernel_untar_build failure
707
* https://tracker.ceph.com/issues/48773
708
  reached max tries: scrub does not complete
709
* https://tracker.ceph.com/issues/58340
710
  mds: fsstress.sh hangs with multimds
711
* https://tracker.ceph.com/issues/61400
712
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
713
* https://tracker.ceph.com/issues/57206
714
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
715
  
716
* https://tracker.ceph.com/issues/57656
717
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
718
* https://tracker.ceph.com/issues/61399
719
  ior build failure
720
* https://tracker.ceph.com/issues/57676
721
  error during scrub thrashing: backtrace
722
  
723
* https://tracker.ceph.com/issues/38452
724
  'sudo -u postgres -- pgbench -s 500 -i' failed
725 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
726 157 Venky Shankar
  blogbench.sh failure
727
728
h3. 18 July 2023
729
730
* https://tracker.ceph.com/issues/52624
731
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
732
* https://tracker.ceph.com/issues/57676
733
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
734
* https://tracker.ceph.com/issues/54460
735
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
736
* https://tracker.ceph.com/issues/57655
737
    qa: fs:mixed-clients kernel_untar_build failure
738
* https://tracker.ceph.com/issues/51964
739
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
740
* https://tracker.ceph.com/issues/59344
741
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
742
* https://tracker.ceph.com/issues/61182
743
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
744
* https://tracker.ceph.com/issues/61957
745
    test_client_limits.TestClientLimits.test_client_release_bug
746
* https://tracker.ceph.com/issues/59348
747
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
748
* https://tracker.ceph.com/issues/61892
749
    test_strays.TestStrays.test_snapshot_remove failed
750
* https://tracker.ceph.com/issues/59346
751
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
752
* https://tracker.ceph.com/issues/44565
753
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
754
* https://tracker.ceph.com/issues/62067
755
    ffsb.sh failure "Resource temporarily unavailable"
756 156 Venky Shankar
757
758
h3. 17 July 2023
759
760
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
761
762
* https://tracker.ceph.com/issues/61982
763
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
764
* https://tracker.ceph.com/issues/59344
765
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
766
* https://tracker.ceph.com/issues/61182
767
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
768
* https://tracker.ceph.com/issues/61957
769
    test_client_limits.TestClientLimits.test_client_release_bug
770
* https://tracker.ceph.com/issues/61400
771
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
772
* https://tracker.ceph.com/issues/59348
773
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
774
* https://tracker.ceph.com/issues/61892
775
    test_strays.TestStrays.test_snapshot_remove failed
776
* https://tracker.ceph.com/issues/59346
777
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
778
* https://tracker.ceph.com/issues/62036
779
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
780
* https://tracker.ceph.com/issues/61737
781
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
782
* https://tracker.ceph.com/issues/44565
783
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
784 155 Rishabh Dave
785 1 Patrick Donnelly
786 153 Rishabh Dave
h3. 13 July 2023 Run 2
787 152 Rishabh Dave
788
789
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
790
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
791
792
* https://tracker.ceph.com/issues/61957
793
  test_client_limits.TestClientLimits.test_client_release_bug
794
* https://tracker.ceph.com/issues/61982
795
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
796
* https://tracker.ceph.com/issues/59348
797
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
798
* https://tracker.ceph.com/issues/59344
799
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
800
* https://tracker.ceph.com/issues/54460
801
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
802
* https://tracker.ceph.com/issues/57655
803
  qa: fs:mixed-clients kernel_untar_build failure
804
* https://tracker.ceph.com/issues/61400
805
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
806
* https://tracker.ceph.com/issues/61399
807
  ior build failure
808
809 151 Venky Shankar
h3. 13 July 2023
810
811
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
812
813
* https://tracker.ceph.com/issues/54460
814
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
815
* https://tracker.ceph.com/issues/61400
816
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
817
* https://tracker.ceph.com/issues/57655
818
    qa: fs:mixed-clients kernel_untar_build failure
819
* https://tracker.ceph.com/issues/61945
820
    LibCephFS.DelegTimeout failure
821
* https://tracker.ceph.com/issues/52624
822
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
823
* https://tracker.ceph.com/issues/57676
824
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
825
* https://tracker.ceph.com/issues/59348
826
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
827
* https://tracker.ceph.com/issues/59344
828
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
829
* https://tracker.ceph.com/issues/51964
830
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
831
* https://tracker.ceph.com/issues/59346
832
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
833
* https://tracker.ceph.com/issues/61982
834
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
835 150 Rishabh Dave
836
837
h3. 13 Jul 2023
838
839
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
840
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
841
842
* https://tracker.ceph.com/issues/61957
843
  test_client_limits.TestClientLimits.test_client_release_bug
844
* https://tracker.ceph.com/issues/59348
845
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
846
* https://tracker.ceph.com/issues/59346
847
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
848
* https://tracker.ceph.com/issues/48773
849
  scrub does not complete: reached max tries
850
* https://tracker.ceph.com/issues/59344
851
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
852
* https://tracker.ceph.com/issues/52438
853
  qa: ffsb timeout
854
* https://tracker.ceph.com/issues/57656
855
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
856
* https://tracker.ceph.com/issues/58742
857
  xfstests-dev: kcephfs: generic
858
* https://tracker.ceph.com/issues/61399
859 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
860 149 Rishabh Dave
861 148 Rishabh Dave
h3. 12 July 2023
862
863
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
864
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
865
866
* https://tracker.ceph.com/issues/61892
867
  test_strays.TestStrays.test_snapshot_remove failed
868
* https://tracker.ceph.com/issues/59348
869
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
870
* https://tracker.ceph.com/issues/53859
871
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
872
* https://tracker.ceph.com/issues/59346
873
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
874
* https://tracker.ceph.com/issues/58742
875
  xfstests-dev: kcephfs: generic
876
* https://tracker.ceph.com/issues/59344
877
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
878
* https://tracker.ceph.com/issues/52438
879
  qa: ffsb timeout
880
* https://tracker.ceph.com/issues/57656
881
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
882
* https://tracker.ceph.com/issues/54460
883
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
884
* https://tracker.ceph.com/issues/57655
885
  qa: fs:mixed-clients kernel_untar_build failure
886
* https://tracker.ceph.com/issues/61182
887
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
888
* https://tracker.ceph.com/issues/61400
889
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
890 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
891 146 Patrick Donnelly
  reached max tries: scrub does not complete
892
893
h3. 05 July 2023
894
895
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
896
897 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
898 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
899
900
h3. 27 Jun 2023
901
902
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
903 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
904
905
* https://tracker.ceph.com/issues/59348
906
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
907
* https://tracker.ceph.com/issues/54460
908
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
909
* https://tracker.ceph.com/issues/59346
910
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
911
* https://tracker.ceph.com/issues/59344
912
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
913
* https://tracker.ceph.com/issues/61399
914
  libmpich: undefined references to fi_strerror
915
* https://tracker.ceph.com/issues/50223
916
  client.xxxx isn't responding to mclientcaps(revoke)
917 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
918
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
919 142 Venky Shankar
920
921
h3. 22 June 2023
922
923
* https://tracker.ceph.com/issues/57676
924
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
925
* https://tracker.ceph.com/issues/54460
926
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
927
* https://tracker.ceph.com/issues/59344
928
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
929
* https://tracker.ceph.com/issues/59348
930
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
931
* https://tracker.ceph.com/issues/61400
932
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
933
* https://tracker.ceph.com/issues/57655
934
    qa: fs:mixed-clients kernel_untar_build failure
935
* https://tracker.ceph.com/issues/61394
936
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
937
* https://tracker.ceph.com/issues/61762
938
    qa: wait_for_clean: failed before timeout expired
939
* https://tracker.ceph.com/issues/61775
940
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
941
* https://tracker.ceph.com/issues/44565
942
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
943
* https://tracker.ceph.com/issues/61790
944
    cephfs client to mds comms remain silent after reconnect
945
* https://tracker.ceph.com/issues/61791
946
    snaptest-git-ceph.sh test timed out (job dead)
947 139 Venky Shankar
948
949
h3. 20 June 2023
950
951
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
952
953
* https://tracker.ceph.com/issues/57676
954
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
955
* https://tracker.ceph.com/issues/54460
956
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
957 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
958 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
959 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
960 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
961
* https://tracker.ceph.com/issues/59344
962
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
963
* https://tracker.ceph.com/issues/59348
964
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
965
* https://tracker.ceph.com/issues/57656
966
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
967
* https://tracker.ceph.com/issues/61400
968
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
969
* https://tracker.ceph.com/issues/57655
970
    qa: fs:mixed-clients kernel_untar_build failure
971
* https://tracker.ceph.com/issues/44565
972
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
973
* https://tracker.ceph.com/issues/61737
974 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
975
976
h3. 16 June 2023
977
978 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
979 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
980 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
981 1 Patrick Donnelly
(bins were rebuilt with a subset of orig PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
982
983
984
* https://tracker.ceph.com/issues/59344
985
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
986 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
987
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
988 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
989
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
990
* https://tracker.ceph.com/issues/57656
991
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
992
* https://tracker.ceph.com/issues/54460
993
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
994 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
995
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
996 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
997
  libmpich: undefined references to fi_strerror
998
* https://tracker.ceph.com/issues/58945
999
  xfstests-dev: ceph-fuse: generic 
1000 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1001 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1002
1003
h3. 24 May 2023
1004
1005
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1006
1007
* https://tracker.ceph.com/issues/57676
1008
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1009
* https://tracker.ceph.com/issues/59683
1010
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1011
* https://tracker.ceph.com/issues/61399
1012
    qa: "[Makefile:299: ior] Error 1"
1013
* https://tracker.ceph.com/issues/61265
1014
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1015
* https://tracker.ceph.com/issues/59348
1016
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1017
* https://tracker.ceph.com/issues/59346
1018
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1019
* https://tracker.ceph.com/issues/61400
1020
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1021
* https://tracker.ceph.com/issues/54460
1022
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1023
* https://tracker.ceph.com/issues/51964
1024
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1025
* https://tracker.ceph.com/issues/59344
1026
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1027
* https://tracker.ceph.com/issues/61407
1028
    mds: abort on CInode::verify_dirfrags
1029
* https://tracker.ceph.com/issues/48773
1030
    qa: scrub does not complete
1031
* https://tracker.ceph.com/issues/57655
1032
    qa: fs:mixed-clients kernel_untar_build failure
1033
* https://tracker.ceph.com/issues/61409
1034 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1035
1036
h3. 15 May 2023
1037 130 Venky Shankar
1038 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1039
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1040
1041
* https://tracker.ceph.com/issues/52624
1042
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1043
* https://tracker.ceph.com/issues/54460
1044
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1045
* https://tracker.ceph.com/issues/57676
1046
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1047
* https://tracker.ceph.com/issues/59684 [kclient bug]
1048
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1049
* https://tracker.ceph.com/issues/59348
1050
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1051 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1052
    dbench test results in call trace in dmesg [kclient bug]
1053 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1054 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1055 125 Venky Shankar
1056
 
1057 129 Rishabh Dave
h3. 11 May 2023
1058
1059
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1060
1061
* https://tracker.ceph.com/issues/59684 [kclient bug]
1062
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1063
* https://tracker.ceph.com/issues/59348
1064
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1065
* https://tracker.ceph.com/issues/57655
1066
  qa: fs:mixed-clients kernel_untar_build failure
1067
* https://tracker.ceph.com/issues/57676
1068
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1069
* https://tracker.ceph.com/issues/55805
1070
  error during scrub thrashing reached max tries in 900 secs
1071
* https://tracker.ceph.com/issues/54460
1072
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1073
* https://tracker.ceph.com/issues/57656
1074
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1075
* https://tracker.ceph.com/issues/58220
1076
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1077 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1078
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1079 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1080
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1081 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1082
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1083 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1084
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1085
1086 125 Venky Shankar
h3. 11 May 2023
1087 127 Venky Shankar
1088
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1089 126 Venky Shankar
1090 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1091
 was included in the branch; however, the PR has since been updated and needs a retest).
1092
1093
* https://tracker.ceph.com/issues/52624
1094
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1095
* https://tracker.ceph.com/issues/54460
1096
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1097
* https://tracker.ceph.com/issues/57676
1098
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1099
* https://tracker.ceph.com/issues/59683
1100
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1101
* https://tracker.ceph.com/issues/59684 [kclient bug]
1102
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1103
* https://tracker.ceph.com/issues/59348
1104 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1105
1106
h3. 09 May 2023
1107
1108
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1109
1110
* https://tracker.ceph.com/issues/52624
1111
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1112
* https://tracker.ceph.com/issues/58340
1113
    mds: fsstress.sh hangs with multimds
1114
* https://tracker.ceph.com/issues/54460
1115
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1116
* https://tracker.ceph.com/issues/57676
1117
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1118
* https://tracker.ceph.com/issues/51964
1119
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1120
* https://tracker.ceph.com/issues/59350
1121
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1122
* https://tracker.ceph.com/issues/59683
1123
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1124
* https://tracker.ceph.com/issues/59684 [kclient bug]
1125
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1126
* https://tracker.ceph.com/issues/59348
1127 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1128
1129
h3. 10 Apr 2023
1130
1131
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1132
1133
* https://tracker.ceph.com/issues/52624
1134
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1135
* https://tracker.ceph.com/issues/58340
1136
    mds: fsstress.sh hangs with multimds
1137
* https://tracker.ceph.com/issues/54460
1138
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1139
* https://tracker.ceph.com/issues/57676
1140
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1141 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1142 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1143 121 Rishabh Dave
1144 120 Rishabh Dave
h3. 31 Mar 2023
1145 122 Rishabh Dave
1146
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1147 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1148
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1149
1150
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).
1151
1152
* https://tracker.ceph.com/issues/57676
1153
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1154
* https://tracker.ceph.com/issues/54460
1155
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1156
* https://tracker.ceph.com/issues/58220
1157
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1158
* https://tracker.ceph.com/issues/58220#note-9
1159
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1160
* https://tracker.ceph.com/issues/56695
1161
  Command failed (workunit test suites/pjd.sh)
1162
* https://tracker.ceph.com/issues/58564 
1163
  workunit dbench failed with error code 1
1164
* https://tracker.ceph.com/issues/57206
1165
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1166
* https://tracker.ceph.com/issues/57580
1167
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1168
* https://tracker.ceph.com/issues/58940
1169
  ceph osd hit ceph_abort
1170
* https://tracker.ceph.com/issues/55805
1171 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1172
1173
h3. 30 March 2023
1174
1175
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1176
1177
* https://tracker.ceph.com/issues/58938
1178
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1179
* https://tracker.ceph.com/issues/51964
1180
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1181
* https://tracker.ceph.com/issues/58340
1182 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1183
1184 115 Venky Shankar
h3. 29 March 2023
1185 114 Venky Shankar
1186
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1187
1188
* https://tracker.ceph.com/issues/56695
1189
    [RHEL stock] pjd test failures
1190
* https://tracker.ceph.com/issues/57676
1191
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1192
* https://tracker.ceph.com/issues/57087
1193
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1194 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1195
    mds: fsstress.sh hangs with multimds
1196 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1197
    qa: fs:mixed-clients kernel_untar_build failure
1198 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1199
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1200 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1201 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1202
1203
h3. 13 Mar 2023
1204
1205
* https://tracker.ceph.com/issues/56695
1206
    [RHEL stock] pjd test failures
1207
* https://tracker.ceph.com/issues/57676
1208
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1209
* https://tracker.ceph.com/issues/51964
1210
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1211
* https://tracker.ceph.com/issues/54460
1212
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1213
* https://tracker.ceph.com/issues/57656
1214 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1215
1216
h3. 09 Mar 2023
1217
1218
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1219
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1220
1221
* https://tracker.ceph.com/issues/56695
1222
    [RHEL stock] pjd test failures
1223
* https://tracker.ceph.com/issues/57676
1224
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1225
* https://tracker.ceph.com/issues/51964
1226
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1227
* https://tracker.ceph.com/issues/54460
1228
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1229
* https://tracker.ceph.com/issues/58340
1230
    mds: fsstress.sh hangs with multimds
1231
* https://tracker.ceph.com/issues/57087
1232 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1233
1234
h3. 07 Mar 2023
1235
1236
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1237
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1238
1239
* https://tracker.ceph.com/issues/56695
1240
    [RHEL stock] pjd test failures
1241
* https://tracker.ceph.com/issues/57676
1242
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1243
* https://tracker.ceph.com/issues/51964
1244
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1245
* https://tracker.ceph.com/issues/57656
1246
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1247
* https://tracker.ceph.com/issues/57655
1248
    qa: fs:mixed-clients kernel_untar_build failure
1249
* https://tracker.ceph.com/issues/58220
1250
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1251
* https://tracker.ceph.com/issues/54460
1252
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1253
* https://tracker.ceph.com/issues/58934
1254 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1255
1256
h3. 28 Feb 2023
1257
1258
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1259
1260
* https://tracker.ceph.com/issues/56695
1261
    [RHEL stock] pjd test failures
1262
* https://tracker.ceph.com/issues/57676
1263
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1264 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1265 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1266
1267 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1268
1269
h3. 25 Jan 2023
1270
1271
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1272
1273
* https://tracker.ceph.com/issues/52624
1274
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1275
* https://tracker.ceph.com/issues/56695
1276
    [RHEL stock] pjd test failures
1277
* https://tracker.ceph.com/issues/57676
1278
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1279
* https://tracker.ceph.com/issues/56446
1280
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1281
* https://tracker.ceph.com/issues/57206
1282
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1283
* https://tracker.ceph.com/issues/58220
1284
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1285
* https://tracker.ceph.com/issues/58340
1286
  mds: fsstress.sh hangs with multimds
1287
* https://tracker.ceph.com/issues/56011
1288
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1289
* https://tracker.ceph.com/issues/54460
1290 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1291
1292
h3. 30 JAN 2023
1293
1294
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1295
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1296 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1297
1298 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1299
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1300
* https://tracker.ceph.com/issues/56695
1301
  [RHEL stock] pjd test failures
1302
* https://tracker.ceph.com/issues/57676
1303
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1304
* https://tracker.ceph.com/issues/55332
1305
  Failure in snaptest-git-ceph.sh
1306
* https://tracker.ceph.com/issues/51964
1307
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1308
* https://tracker.ceph.com/issues/56446
1309
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1310
* https://tracker.ceph.com/issues/57655 
1311
  qa: fs:mixed-clients kernel_untar_build failure
1312
* https://tracker.ceph.com/issues/54460
1313
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1314 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1315
  mds: fsstress.sh hangs with multimds
1316 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1317 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1318
1319
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1320 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1321
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1322 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1323 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1324
1325
h3. 15 Dec 2022
1326
1327
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1328
1329
* https://tracker.ceph.com/issues/52624
1330
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1331
* https://tracker.ceph.com/issues/56695
1332
    [RHEL stock] pjd test failures
1333
* https://tracker.ceph.com/issues/58219
1334
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1335
* https://tracker.ceph.com/issues/57655
1336
    qa: fs:mixed-clients kernel_untar_build failure
1337
* https://tracker.ceph.com/issues/57676
1338
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1339
* https://tracker.ceph.com/issues/58340
1340 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1341
1342
h3. 08 Dec 2022
1343 99 Venky Shankar
1344 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1345
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1346
1347
(lots of transient git.ceph.com failures)
1348
1349
* https://tracker.ceph.com/issues/52624
1350
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1351
* https://tracker.ceph.com/issues/56695
1352
    [RHEL stock] pjd test failures
1353
* https://tracker.ceph.com/issues/57655
1354
    qa: fs:mixed-clients kernel_untar_build failure
1355
* https://tracker.ceph.com/issues/58219
1356
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1357
* https://tracker.ceph.com/issues/58220
1358
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1359 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1360
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1361 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1362
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1363
* https://tracker.ceph.com/issues/54460
1364
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1365 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1366 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1367
1368
h3. 14 Oct 2022
1369
1370
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1371
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1372
1373
* https://tracker.ceph.com/issues/52624
1374
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1375
* https://tracker.ceph.com/issues/55804
1376
    Command failed (workunit test suites/pjd.sh)
1377
* https://tracker.ceph.com/issues/51964
1378
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1379
* https://tracker.ceph.com/issues/57682
1380
    client: ERROR: test_reconnect_after_blocklisted
1381 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1382 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1383
1384
h3. 10 Oct 2022
1385 92 Rishabh Dave
1386 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1387
1388
reruns
1389
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1390 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1391 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1392 93 Rishabh Dave
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1393 91 Rishabh Dave
1394
known bugs
1395
* https://tracker.ceph.com/issues/52624
1396
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1397
* https://tracker.ceph.com/issues/50223
1398
  client.xxxx isn't responding to mclientcaps(revoke
1399
* https://tracker.ceph.com/issues/57299
1400
  qa: test_dump_loads fails with JSONDecodeError
1401
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1402
  qa: fs:mixed-clients kernel_untar_build failure
1403
* https://tracker.ceph.com/issues/57206
1404 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1405
1406
h3. 2022 Sep 29
1407
1408
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1409
1410
* https://tracker.ceph.com/issues/55804
1411
  Command failed (workunit test suites/pjd.sh)
1412
* https://tracker.ceph.com/issues/36593
1413
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1414
* https://tracker.ceph.com/issues/52624
1415
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1416
* https://tracker.ceph.com/issues/51964
1417
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1418
* https://tracker.ceph.com/issues/56632
1419
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1420
* https://tracker.ceph.com/issues/50821
1421 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1422
1423
h3. 2022 Sep 26
1424
1425
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1426
1427
* https://tracker.ceph.com/issues/55804
1428
    qa failure: pjd link tests failed
1429
* https://tracker.ceph.com/issues/57676
1430
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1431
* https://tracker.ceph.com/issues/52624
1432
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1433
* https://tracker.ceph.com/issues/57580
1434
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1435
* https://tracker.ceph.com/issues/48773
1436
    qa: scrub does not complete
1437
* https://tracker.ceph.com/issues/57299
1438
    qa: test_dump_loads fails with JSONDecodeError
1439
* https://tracker.ceph.com/issues/57280
1440
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1441
* https://tracker.ceph.com/issues/57205
1442
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1443
* https://tracker.ceph.com/issues/57656
1444
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1445
* https://tracker.ceph.com/issues/57677
1446
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1447
* https://tracker.ceph.com/issues/57206
1448
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1449
* https://tracker.ceph.com/issues/57446
1450
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1451 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1452
    qa: fs:mixed-clients kernel_untar_build failure
1453 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1454
    client: ERROR: test_reconnect_after_blocklisted
1455 87 Patrick Donnelly
1456
1457
h3. 2022 Sep 22
1458
1459
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1460
1461
* https://tracker.ceph.com/issues/57299
1462
    qa: test_dump_loads fails with JSONDecodeError
1463
* https://tracker.ceph.com/issues/57205
1464
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1465
* https://tracker.ceph.com/issues/52624
1466
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1467
* https://tracker.ceph.com/issues/57580
1468
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1469
* https://tracker.ceph.com/issues/57280
1470
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1471
* https://tracker.ceph.com/issues/48773
1472
    qa: scrub does not complete
1473
* https://tracker.ceph.com/issues/56446
1474
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1475
* https://tracker.ceph.com/issues/57206
1476
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1477
* https://tracker.ceph.com/issues/51267
1478
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1479
1480
NEW:
1481
1482
* https://tracker.ceph.com/issues/57656
1483
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1484
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1485
    qa: fs:mixed-clients kernel_untar_build failure
1486
* https://tracker.ceph.com/issues/57657
1487
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1488
1489
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1490 80 Venky Shankar
1491 79 Venky Shankar
1492
h3. 2022 Sep 16
1493
1494
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1495
1496
* https://tracker.ceph.com/issues/57446
1497
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1498
* https://tracker.ceph.com/issues/57299
1499
    qa: test_dump_loads fails with JSONDecodeError
1500
* https://tracker.ceph.com/issues/50223
1501
    client.xxxx isn't responding to mclientcaps(revoke)
1502
* https://tracker.ceph.com/issues/52624
1503
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1504
* https://tracker.ceph.com/issues/57205
1505
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1506
* https://tracker.ceph.com/issues/57280
1507
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1508
* https://tracker.ceph.com/issues/51282
1509
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1510
* https://tracker.ceph.com/issues/48203
1511
    qa: quota failure
1512
* https://tracker.ceph.com/issues/36593
1513
    qa: quota failure caused by clients stepping on each other
1514
* https://tracker.ceph.com/issues/57580
1515 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1516
1517 76 Rishabh Dave
1518
h3. 2022 Aug 26
1519
1520
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1521
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1522
1523
* https://tracker.ceph.com/issues/57206
1524
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1525
* https://tracker.ceph.com/issues/56632
1526
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1527
* https://tracker.ceph.com/issues/56446
1528
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1529
* https://tracker.ceph.com/issues/51964
1530
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1531
* https://tracker.ceph.com/issues/53859
1532
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1533
1534
* https://tracker.ceph.com/issues/54460
1535
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1536
* https://tracker.ceph.com/issues/54462
1537
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1540
* https://tracker.ceph.com/issues/36593
1541
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1542
1543
* https://tracker.ceph.com/issues/52624
1544
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1545
* https://tracker.ceph.com/issues/55804
1546
  Command failed (workunit test suites/pjd.sh)
1547
* https://tracker.ceph.com/issues/50223
1548
  client.xxxx isn't responding to mclientcaps(revoke)
1549 75 Venky Shankar
1550
1551
h3. 2022 Aug 22
1552
1553
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1554
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1555
1556
* https://tracker.ceph.com/issues/52624
1557
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1558
* https://tracker.ceph.com/issues/56446
1559
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1560
* https://tracker.ceph.com/issues/55804
1561
    Command failed (workunit test suites/pjd.sh)
1562
* https://tracker.ceph.com/issues/51278
1563
    mds: "FAILED ceph_assert(!segments.empty())"
1564
* https://tracker.ceph.com/issues/54460
1565
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1566
* https://tracker.ceph.com/issues/57205
1567
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1568
* https://tracker.ceph.com/issues/57206
1569
    ceph_test_libcephfs_reclaim crashes during test
1570
* https://tracker.ceph.com/issues/53859
1571
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1572
* https://tracker.ceph.com/issues/50223
1573 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1574
1575
h3. 2022 Aug 12
1576
1577
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1578
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1579
1580
* https://tracker.ceph.com/issues/52624
1581
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1582
* https://tracker.ceph.com/issues/56446
1583
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1584
* https://tracker.ceph.com/issues/51964
1585
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1586
* https://tracker.ceph.com/issues/55804
1587
    Command failed (workunit test suites/pjd.sh)
1588
* https://tracker.ceph.com/issues/50223
1589
    client.xxxx isn't responding to mclientcaps(revoke)
1590
* https://tracker.ceph.com/issues/50821
1591 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1592 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1593 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1594
1595
h3. 2022 Aug 04
1596
1597
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1598
1599 69 Rishabh Dave
Unrelated teuthology failure on rhel
1600 68 Rishabh Dave
1601
h3. 2022 Jul 25
1602
1603
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1604
1605 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1606
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1607 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1608
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1609
1610
* https://tracker.ceph.com/issues/55804
1611
  Command failed (workunit test suites/pjd.sh)
1612
* https://tracker.ceph.com/issues/50223
1613
  client.xxxx isn't responding to mclientcaps(revoke)
1614
1615
* https://tracker.ceph.com/issues/54460
1616
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1617 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1618 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1619 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1620 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1621
1622
h3. 2022 July 22
1623
1624
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1625
1626
MDS_HEALTH_DUMMY error in log fixed by followup commit.
1627
transient selinux ping failure
1628
1629
* https://tracker.ceph.com/issues/56694
1630
    qa: avoid blocking forever on hung umount
1631
* https://tracker.ceph.com/issues/56695
1632
    [RHEL stock] pjd test failures
1633
* https://tracker.ceph.com/issues/56696
1634
    admin keyring disappears during qa run
1635
* https://tracker.ceph.com/issues/56697
1636
    qa: fs/snaps fails for fuse
1637
* https://tracker.ceph.com/issues/50222
1638
    osd: 5.2s0 deep-scrub : stat mismatch
1639
* https://tracker.ceph.com/issues/56698
1640
    client: FAILED ceph_assert(_size == 0)
1641
* https://tracker.ceph.com/issues/50223
1642
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1643 66 Rishabh Dave
1644 65 Rishabh Dave
1645
h3. 2022 Jul 15
1646
1647
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1648
1649
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1650
1651
* https://tracker.ceph.com/issues/53859
1652
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1653
* https://tracker.ceph.com/issues/55804
1654
  Command failed (workunit test suites/pjd.sh)
1655
* https://tracker.ceph.com/issues/50223
1656
  client.xxxx isn't responding to mclientcaps(revoke)
1657
* https://tracker.ceph.com/issues/50222
1658
  osd: deep-scrub : stat mismatch
1659
1660
* https://tracker.ceph.com/issues/56632
1661
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1662
* https://tracker.ceph.com/issues/56634
1663
  workunit test fs/snaps/snaptest-intodir.sh
1664
* https://tracker.ceph.com/issues/56644
1665
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1666
1667 61 Rishabh Dave
1668
1669
h3. 2022 July 05
1670 62 Rishabh Dave
1671 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1672
1673
On the 1st re-run, some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1674
1675
On the 2nd re-run, only a few jobs failed -
1676 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1678
1679
* https://tracker.ceph.com/issues/56446
1680
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1681
* https://tracker.ceph.com/issues/55804
1682
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1683
1684
* https://tracker.ceph.com/issues/56445
1685 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1686
* https://tracker.ceph.com/issues/51267
1687
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1688 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1689
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1690 61 Rishabh Dave
1691 58 Venky Shankar
1692
1693
h3. 2022 July 04
1694
1695
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1696
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
1697
1698
* https://tracker.ceph.com/issues/56445
1699 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1700
* https://tracker.ceph.com/issues/56446
1701
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1702
* https://tracker.ceph.com/issues/51964
1703 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1704 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1705 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1706
1707
h3. 2022 June 20
1708
1709
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1710
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1711
1712
* https://tracker.ceph.com/issues/52624
1713
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1714
* https://tracker.ceph.com/issues/55804
1715
    qa failure: pjd link tests failed
1716
* https://tracker.ceph.com/issues/54108
1717
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1718
* https://tracker.ceph.com/issues/55332
1719 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1720
1721
h3. 2022 June 13
1722
1723
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1724
1725
* https://tracker.ceph.com/issues/56024
1726
    cephadm: removes ceph.conf during qa run causing command failure
1727
* https://tracker.ceph.com/issues/48773
1728
    qa: scrub does not complete
1729
* https://tracker.ceph.com/issues/56012
1730
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1731 55 Venky Shankar
1732 54 Venky Shankar
1733
h3. 2022 Jun 13
1734
1735
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1736
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1737
1738
* https://tracker.ceph.com/issues/52624
1739
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1740
* https://tracker.ceph.com/issues/51964
1741
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1742
* https://tracker.ceph.com/issues/53859
1743
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1744
* https://tracker.ceph.com/issues/55804
1745
    qa failure: pjd link tests failed
1746
* https://tracker.ceph.com/issues/56003
1747
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1748
* https://tracker.ceph.com/issues/56011
1749
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1750
* https://tracker.ceph.com/issues/56012
1751 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1752
1753
h3. 2022 Jun 07
1754
1755
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1756
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1757
1758
* https://tracker.ceph.com/issues/52624
1759
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1760
* https://tracker.ceph.com/issues/50223
1761
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1762
* https://tracker.ceph.com/issues/50224
1763 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1764
1765
h3. 2022 May 12
1766 52 Venky Shankar
1767 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1768
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
1769
1770
* https://tracker.ceph.com/issues/52624
1771
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1772
* https://tracker.ceph.com/issues/50223
1773
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1774
* https://tracker.ceph.com/issues/55332
1775
    Failure in snaptest-git-ceph.sh
1776
* https://tracker.ceph.com/issues/53859
1777 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1778 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1779
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1780 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1781 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1782
1783 50 Venky Shankar
h3. 2022 May 04
1784
1785
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1786 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1787
1788
* https://tracker.ceph.com/issues/52624
1789
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1790
* https://tracker.ceph.com/issues/50223
1791
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1792
* https://tracker.ceph.com/issues/55332
1793
    Failure in snaptest-git-ceph.sh
1794
* https://tracker.ceph.com/issues/53859
1795
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1796
* https://tracker.ceph.com/issues/55516
1797
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1798
* https://tracker.ceph.com/issues/55537
1799
    mds: crash during fs:upgrade test
1800
* https://tracker.ceph.com/issues/55538
1801 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1802
1803
h3. 2022 Apr 25
1804
1805
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1806
1807
* https://tracker.ceph.com/issues/52624
1808
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1809
* https://tracker.ceph.com/issues/50223
1810
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1811
* https://tracker.ceph.com/issues/55258
1812
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1813
* https://tracker.ceph.com/issues/55377
1814 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1815
1816
h3. 2022 Apr 14
1817
1818
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1819
1820
* https://tracker.ceph.com/issues/52624
1821
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1822
* https://tracker.ceph.com/issues/50223
1823
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1824
* https://tracker.ceph.com/issues/52438
1825
    qa: ffsb timeout
1826
* https://tracker.ceph.com/issues/55170
1827
    mds: crash during rejoin (CDir::fetch_keys)
1828
* https://tracker.ceph.com/issues/55331
1829
    pjd failure
1830
* https://tracker.ceph.com/issues/48773
1831
    qa: scrub does not complete
1832
* https://tracker.ceph.com/issues/55332
1833
    Failure in snaptest-git-ceph.sh
1834
* https://tracker.ceph.com/issues/55258
1835 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1836
1837 46 Venky Shankar
h3. 2022 Apr 11
1838 45 Venky Shankar
1839
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1840
1841
* https://tracker.ceph.com/issues/48773
1842
    qa: scrub does not complete
1843
* https://tracker.ceph.com/issues/52624
1844
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1845
* https://tracker.ceph.com/issues/52438
1846
    qa: ffsb timeout
1847
* https://tracker.ceph.com/issues/48680
1848
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1849
* https://tracker.ceph.com/issues/55236
1850
    qa: fs/snaps tests fails with "hit max job timeout"
1851
* https://tracker.ceph.com/issues/54108
1852
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1853
* https://tracker.ceph.com/issues/54971
1854
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1855
* https://tracker.ceph.com/issues/50223
1856
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1857
* https://tracker.ceph.com/issues/55258
1858 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1859 42 Venky Shankar
1860 43 Venky Shankar
h3. 2022 Mar 21
1861
1862
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1863
1864
The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
1865
1866
1867 42 Venky Shankar
h3. 2022 Mar 08
1868
1869
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1870
1871
rerun with
1872
- (drop) https://github.com/ceph/ceph/pull/44679
1873
- (drop) https://github.com/ceph/ceph/pull/44958
1874
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1875
1876
* https://tracker.ceph.com/issues/54419 (new)
1877
    `ceph orch upgrade start` seems to never reach completion
1878
* https://tracker.ceph.com/issues/51964
1879
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1880
* https://tracker.ceph.com/issues/52624
1881
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1882
* https://tracker.ceph.com/issues/50223
1883
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1884
* https://tracker.ceph.com/issues/52438
1885
    qa: ffsb timeout
1886
* https://tracker.ceph.com/issues/50821
1887
    qa: untar_snap_rm failure during mds thrashing
1888 41 Venky Shankar
1889
1890
h3. 2022 Feb 09
1891
1892
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1893
1894
rerun with
1895
- (drop) https://github.com/ceph/ceph/pull/37938
1896
- (drop) https://github.com/ceph/ceph/pull/44335
1897
- (drop) https://github.com/ceph/ceph/pull/44491
1898
- (drop) https://github.com/ceph/ceph/pull/44501
1899
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1900
1901
* https://tracker.ceph.com/issues/51964
1902
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1903
* https://tracker.ceph.com/issues/54066
1904
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1905
* https://tracker.ceph.com/issues/48773
1906
    qa: scrub does not complete
1907
* https://tracker.ceph.com/issues/52624
1908
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1909
* https://tracker.ceph.com/issues/50223
1910
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1911
* https://tracker.ceph.com/issues/52438
1912 40 Patrick Donnelly
    qa: ffsb timeout
1913
1914
h3. 2022 Feb 01
1915
1916
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1917
1918
* https://tracker.ceph.com/issues/54107
1919
    kclient: hang during umount
1920
* https://tracker.ceph.com/issues/54106
1921
    kclient: hang during workunit cleanup
1922
* https://tracker.ceph.com/issues/54108
1923
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1924
* https://tracker.ceph.com/issues/48773
1925
    qa: scrub does not complete
1926
* https://tracker.ceph.com/issues/52438
1927
    qa: ffsb timeout
1928 36 Venky Shankar
1929
1930
h3. 2022 Jan 13
1931 39 Venky Shankar
1932 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1933 38 Venky Shankar
1934
rerun with:
1935 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1936
- (drop) https://github.com/ceph/ceph/pull/43184
1937
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1938
1939
* https://tracker.ceph.com/issues/50223
1940
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1941
* https://tracker.ceph.com/issues/51282
1942
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1943
* https://tracker.ceph.com/issues/48773
1944
    qa: scrub does not complete
1945
* https://tracker.ceph.com/issues/52624
1946
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1947
* https://tracker.ceph.com/issues/53859
1948 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1949
1950
h3. 2022 Jan 03
1951
1952
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
1953
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
1954
1955
* https://tracker.ceph.com/issues/50223
1956
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1957
* https://tracker.ceph.com/issues/51964
1958
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1959
* https://tracker.ceph.com/issues/51267
1960
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1961
* https://tracker.ceph.com/issues/51282
1962
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1963
* https://tracker.ceph.com/issues/50821
1964
    qa: untar_snap_rm failure during mds thrashing
1965 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
1966
    mds: "FAILED ceph_assert(!segments.empty())"
1967
* https://tracker.ceph.com/issues/52279
1968 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1969 33 Patrick Donnelly
1970
1971
h3. 2021 Dec 22
1972
1973
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
1974
1975
* https://tracker.ceph.com/issues/52624
1976
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1977
* https://tracker.ceph.com/issues/50223
1978
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1979
* https://tracker.ceph.com/issues/52279
1980
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1981
* https://tracker.ceph.com/issues/50224
1982
    qa: test_mirroring_init_failure_with_recovery failure
1983
* https://tracker.ceph.com/issues/48773
1984
    qa: scrub does not complete
1985 32 Venky Shankar
1986
1987
h3. 2021 Nov 30
1988
1989
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
1990
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
1991
1992
* https://tracker.ceph.com/issues/53436
1993
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
1994
* https://tracker.ceph.com/issues/51964
1995
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1996
* https://tracker.ceph.com/issues/48812
1997
    qa: test_scrub_pause_and_resume_with_abort failure
1998
* https://tracker.ceph.com/issues/51076
1999
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2000
* https://tracker.ceph.com/issues/50223
2001
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2002
* https://tracker.ceph.com/issues/52624
2003
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2004
* https://tracker.ceph.com/issues/50250
2005
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2006 31 Patrick Donnelly
2007
2008
h3. 2021 November 9
2009
2010
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2011
2012
* https://tracker.ceph.com/issues/53214
2013
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2014
* https://tracker.ceph.com/issues/48773
2015
    qa: scrub does not complete
2016
* https://tracker.ceph.com/issues/50223
2017
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2018
* https://tracker.ceph.com/issues/51282
2019
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2020
* https://tracker.ceph.com/issues/52624
2021
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2022
* https://tracker.ceph.com/issues/53216
2023
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2024
* https://tracker.ceph.com/issues/50250
2025
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2026
2027 30 Patrick Donnelly
2028
2029
h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing