Main » History » Version 135

Kotresh Hiremath Ravishankar, 05/18/2023 11:09 AM

h1. MAIN

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
  test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 Jan 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

reruns
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458

known bugs
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on RHEL.

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log; fixed by a follow-up commit.
Transient SELinux ping failure.

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On the 1st re-run some jobs passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On the 2nd re-run only a few jobs failed:
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26
1015
1016
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1017
1018
* https://tracker.ceph.com/issues/53074
1019
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1020
* https://tracker.ceph.com/issues/52997
1021
    testing: hang ing umount
1022
* https://tracker.ceph.com/issues/50824
1023
    qa: snaptest-git-ceph bus error
1024
* https://tracker.ceph.com/issues/52436
1025
    fs/ceph: "corrupt mdsmap"
1026
* https://tracker.ceph.com/issues/48773
1027
    qa: scrub does not complete
1028
* https://tracker.ceph.com/issues/53082
1029
    ceph-fuse: segmenetation fault in Client::handle_mds_map
1030
* https://tracker.ceph.com/issues/50223
1031
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1032
* https://tracker.ceph.com/issues/52624
1033
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1034
* https://tracker.ceph.com/issues/50224
1035
    qa: test_mirroring_init_failure_with_recovery failure
1036
* https://tracker.ceph.com/issues/50821
1037
    qa: untar_snap_rm failure during mds thrashing
1038
* https://tracker.ceph.com/issues/50250
1039
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")


h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete


h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}


h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete


h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout


h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)


h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream


h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError


h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)


h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error


h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")


h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete


h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80


h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"


h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings


h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing


h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed


h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro


h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election


h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise appeared in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure


h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
1461
1462
1463 6 Patrick Donnelly
h3. 2021 May 11
1464
1465
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042
1466
1467
* one class of failures caused by PR
1468
* https://tracker.ceph.com/issues/48812
1469
    qa: test_scrub_pause_and_resume_with_abort failure
1470
* https://tracker.ceph.com/issues/50390
1471
    mds: monclient: wait_auth_rotating timed out after 30
1472
* https://tracker.ceph.com/issues/48773
1473
    qa: scrub does not complete
1474
* https://tracker.ceph.com/issues/50821
1475
    qa: untar_snap_rm failure during mds thrashing
1476
* https://tracker.ceph.com/issues/50224
1477
    qa: test_mirroring_init_failure_with_recovery failure
1478
* https://tracker.ceph.com/issues/50622 (regression)
1479
    msg: active_connections regression
1480
* https://tracker.ceph.com/issues/50825
1481
    qa: snaptest-git-ceph hang during mon thrashing v2
1482
* https://tracker.ceph.com/issues/50821
1483
    qa: untar_snap_rm failure during mds thrashing
1484
* https://tracker.ceph.com/issues/50823
1485
    qa: RuntimeError: timeout waiting for cluster to stabilize


h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers


h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"


h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30


h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout


h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"


h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure


h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1653
h3. 2021 Mar 24
1654
1655
https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/
1656
1657
* https://tracker.ceph.com/issues/49500
1658
    qa: "Assertion `cb_done' failed."
1659
* https://tracker.ceph.com/issues/50019
1660
    qa: mount failure with cephadm "probably no MDS server is up?"
1661
* https://tracker.ceph.com/issues/50020
1662
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
1663
* https://tracker.ceph.com/issues/48773
1664
    qa: scrub does not complete
1665
* https://tracker.ceph.com/issues/45434
1666
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1667
* https://tracker.ceph.com/issues/48805
1668
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
1669
* https://tracker.ceph.com/issues/48772
1670
    qa: pjd: not ok 9, 44, 80
1671
* https://tracker.ceph.com/issues/50021
1672
    qa: snaptest-git-ceph failure during mon thrashing
1673
* https://tracker.ceph.com/issues/48771
1674
    qa: iogen: workload fails to cause balancing
1675
* https://tracker.ceph.com/issues/50016
1676
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
1677
* https://tracker.ceph.com/issues/49466
1678
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"


h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2


h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

And a failure caused by PR: https://github.com/ceph/ceph/pull/39969


h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing