Main » History » Version 133

Kotresh Hiremath Ravishankar, 05/18/2023 10:45 AM

1 79 Venky Shankar
h1. MAIN
2
3 128 Venky Shankar
h3. 15 May 2023
4
5
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
6 130 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
7 128 Venky Shankar
8
* https://tracker.ceph.com/issues/52624
9
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
10
* https://tracker.ceph.com/issues/54460
11
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
12
* https://tracker.ceph.com/issues/57676
13
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
14
* https://tracker.ceph.com/issues/59684 [kclient bug]
15
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
16
* https://tracker.ceph.com/issues/59348
17
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
18
* https://tracker.ceph.com/issues/61148
19
    dbench test results in call trace in dmesg [kclient bug]
20 131 Venky Shankar
* https://tracker.ceph.com/issues/58340
21
    mds: fsstress.sh hangs with multimds
22 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
23
  https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270166
24 128 Venky Shankar
25 125 Venky Shankar
h3. 11 May 2023
26
27 129 Rishabh Dave
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
28
29
* https://tracker.ceph.com/issues/59684 [kclient bug]
30
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
31
* https://tracker.ceph.com/issues/59348
32
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
33
* https://tracker.ceph.com/issues/57655
34
  qa: fs:mixed-clients kernel_untar_build failure
35
* https://tracker.ceph.com/issues/57676
36
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
37
* https://tracker.ceph.com/issues/55805
38
  error during scrub thrashing reached max tries in 900 secs
39
* https://tracker.ceph.com/issues/54460
40
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
41
* https://tracker.ceph.com/issues/57656
42
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
43
* https://tracker.ceph.com/issues/58220
44
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
45
* https://tracker.ceph.com/issues/58220#note-9
46
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
47 132 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
48
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
49 129 Rishabh Dave
50
h3. 11 May 2023
51
52 125 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
53 127 Venky Shankar
54
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
55 126 Venky Shankar
 was included in the branch; however, the PR has since been updated and needs a retest).
56 125 Venky Shankar
57
* https://tracker.ceph.com/issues/52624
58
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
59
* https://tracker.ceph.com/issues/54460
60
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
61
* https://tracker.ceph.com/issues/57676
62
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
63
* https://tracker.ceph.com/issues/59683
64
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
65
* https://tracker.ceph.com/issues/59684 [kclient bug]
66
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
67
* https://tracker.ceph.com/issues/59348
68
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
69
70 124 Venky Shankar
h3. 09 May 2023
71
72
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
73
74
* https://tracker.ceph.com/issues/52624
75
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
76
* https://tracker.ceph.com/issues/58340
77
    mds: fsstress.sh hangs with multimds
78
* https://tracker.ceph.com/issues/54460
79
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
80
* https://tracker.ceph.com/issues/57676
81
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
82
* https://tracker.ceph.com/issues/51964
83
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
84
* https://tracker.ceph.com/issues/59350
85
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
86
* https://tracker.ceph.com/issues/59683
87
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
88
* https://tracker.ceph.com/issues/59684 [kclient bug]
89
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
90
* https://tracker.ceph.com/issues/59348
91
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
92
93 123 Venky Shankar
h3. 10 Apr 2023
94
95
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
96
97
* https://tracker.ceph.com/issues/52624
98
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
99
* https://tracker.ceph.com/issues/58340
100
    mds: fsstress.sh hangs with multimds
101
* https://tracker.ceph.com/issues/54460
102
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
103
* https://tracker.ceph.com/issues/57676
104
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
105
* https://tracker.ceph.com/issues/51964
106
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
107 119 Rishabh Dave
108 120 Rishabh Dave
h3. 31 Mar 2023
109 121 Rishabh Dave
110 120 Rishabh Dave
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
111 122 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
112
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
113 120 Rishabh Dave
114
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
115
116
* https://tracker.ceph.com/issues/57676
117
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
118
* https://tracker.ceph.com/issues/54460
119
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
120
* https://tracker.ceph.com/issues/58220
121
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
122
* https://tracker.ceph.com/issues/58220#note-9
123
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
124
* https://tracker.ceph.com/issues/56695
125
  Command failed (workunit test suites/pjd.sh)
126
* https://tracker.ceph.com/issues/58564 
127
  workunit dbench failed with error code 1
128
* https://tracker.ceph.com/issues/57206
129
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
130
* https://tracker.ceph.com/issues/57580
131
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
132
* https://tracker.ceph.com/issues/58940
133
  ceph osd hit ceph_abort
134
* https://tracker.ceph.com/issues/55805
135
  error during scrub thrashing reached max tries in 900 secs
136
137 118 Venky Shankar
h3. 30 March 2023
138
139
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
140
141
* https://tracker.ceph.com/issues/58938
142
    qa: xfstests-dev's generic test suite has 7 failures with kclient
143
* https://tracker.ceph.com/issues/51964
144
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
145
* https://tracker.ceph.com/issues/58340
146
    mds: fsstress.sh hangs with multimds
147
148 114 Venky Shankar
h3. 29 March 2023
149
150 115 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
151 114 Venky Shankar
152
* https://tracker.ceph.com/issues/56695
153
    [RHEL stock] pjd test failures
154
* https://tracker.ceph.com/issues/57676
155
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
156
* https://tracker.ceph.com/issues/57087
157
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
158
* https://tracker.ceph.com/issues/58340
159
    mds: fsstress.sh hangs with multimds
160 116 Venky Shankar
* https://tracker.ceph.com/issues/57655
161
    qa: fs:mixed-clients kernel_untar_build failure
162 114 Venky Shankar
* https://tracker.ceph.com/issues/59230
163
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
164 117 Venky Shankar
* https://tracker.ceph.com/issues/58938
165
    qa: xfstests-dev's generic test suite has 7 failures with kclient
166 114 Venky Shankar
167 113 Venky Shankar
h3. 13 Mar 2023
168
169
* https://tracker.ceph.com/issues/56695
170
    [RHEL stock] pjd test failures
171
* https://tracker.ceph.com/issues/57676
172
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
173
* https://tracker.ceph.com/issues/51964
174
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
175
* https://tracker.ceph.com/issues/54460
176
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
177
* https://tracker.ceph.com/issues/57656
178
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
179
180 112 Venky Shankar
h3. 09 Mar 2023
181
182
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
183
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
184
185
* https://tracker.ceph.com/issues/56695
186
    [RHEL stock] pjd test failures
187
* https://tracker.ceph.com/issues/57676
188
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
189
* https://tracker.ceph.com/issues/51964
190
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
191
* https://tracker.ceph.com/issues/54460
192
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
193
* https://tracker.ceph.com/issues/58340
194
    mds: fsstress.sh hangs with multimds
195
* https://tracker.ceph.com/issues/57087
196
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
197
198 111 Venky Shankar
h3. 07 Mar 2023
199
200
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
201
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
202
203
* https://tracker.ceph.com/issues/56695
204
    [RHEL stock] pjd test failures
205
* https://tracker.ceph.com/issues/57676
206
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
207
* https://tracker.ceph.com/issues/51964
208
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
209
* https://tracker.ceph.com/issues/57656
210
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
211
* https://tracker.ceph.com/issues/57655
212
    qa: fs:mixed-clients kernel_untar_build failure
213
* https://tracker.ceph.com/issues/58220
214
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
215
* https://tracker.ceph.com/issues/54460
216
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
217
* https://tracker.ceph.com/issues/58934
218
    snaptest-git-ceph.sh failure with ceph-fuse
219
220 109 Venky Shankar
h3. 28 Feb 2023
221
222
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
223
224
* https://tracker.ceph.com/issues/56695
225
    [RHEL stock] pjd test failures
226
* https://tracker.ceph.com/issues/57676
227
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
228
* https://tracker.ceph.com/issues/56446
229
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
230 110 Venky Shankar
231 109 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
232
233 107 Venky Shankar
h3. 25 Jan 2023
234
235
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
236
237
* https://tracker.ceph.com/issues/52624
238
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
239
* https://tracker.ceph.com/issues/56695
240
    [RHEL stock] pjd test failures
241
* https://tracker.ceph.com/issues/57676
242
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
243
* https://tracker.ceph.com/issues/56446
244
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
245
* https://tracker.ceph.com/issues/57206
246
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
247
* https://tracker.ceph.com/issues/58220
248
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
249
* https://tracker.ceph.com/issues/58340
250
  mds: fsstress.sh hangs with multimds
251
* https://tracker.ceph.com/issues/56011
252
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
253
* https://tracker.ceph.com/issues/54460
254
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
255
256 101 Rishabh Dave
h3. 30 Jan 2023
257
258
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
259
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
260
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
261
262 105 Rishabh Dave
* https://tracker.ceph.com/issues/52624
263
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
264 101 Rishabh Dave
* https://tracker.ceph.com/issues/56695
265
  [RHEL stock] pjd test failures
266
* https://tracker.ceph.com/issues/57676
267
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
268
* https://tracker.ceph.com/issues/55332
269
  Failure in snaptest-git-ceph.sh
270
* https://tracker.ceph.com/issues/51964
271
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
272
* https://tracker.ceph.com/issues/56446
273
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
274
* https://tracker.ceph.com/issues/57655 
275
  qa: fs:mixed-clients kernel_untar_build failure
276
* https://tracker.ceph.com/issues/54460
277
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
278
* https://tracker.ceph.com/issues/58340
279
  mds: fsstress.sh hangs with multimds
280 103 Rishabh Dave
* https://tracker.ceph.com/issues/58219
281
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
282 101 Rishabh Dave
283 102 Rishabh Dave
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
284
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
285
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
286 106 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
287
  workunit test suites/dbench.sh failed with error code 1
288 102 Rishabh Dave
289 100 Venky Shankar
h3. 15 Dec 2022
290
291
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
292
293
* https://tracker.ceph.com/issues/52624
294
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
295
* https://tracker.ceph.com/issues/56695
296
    [RHEL stock] pjd test failures
297
* https://tracker.ceph.com/issues/58219
298
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
299
* https://tracker.ceph.com/issues/57655
300
    qa: fs:mixed-clients kernel_untar_build failure
301
* https://tracker.ceph.com/issues/57676
302
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
303
* https://tracker.ceph.com/issues/58340
304
    mds: fsstress.sh hangs with multimds
305
306 96 Venky Shankar
h3. 08 Dec 2022
307
308
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
309 99 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
310 96 Venky Shankar
311
(lots of transient git.ceph.com failures)
312
313
* https://tracker.ceph.com/issues/52624
314
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
315
* https://tracker.ceph.com/issues/56695
316
    [RHEL stock] pjd test failures
317
* https://tracker.ceph.com/issues/57655
318
    qa: fs:mixed-clients kernel_untar_build failure
319
* https://tracker.ceph.com/issues/58219
320
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
321
* https://tracker.ceph.com/issues/58220
322
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
323
* https://tracker.ceph.com/issues/57676
324
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
325 97 Venky Shankar
* https://tracker.ceph.com/issues/53859
326
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
327 98 Venky Shankar
* https://tracker.ceph.com/issues/54460
328
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
329
* https://tracker.ceph.com/issues/58244
330
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
331 96 Venky Shankar
332 95 Venky Shankar
h3. 14 Oct 2022
333
334
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
335
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
336
337
* https://tracker.ceph.com/issues/52624
338
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
339
* https://tracker.ceph.com/issues/55804
340
    Command failed (workunit test suites/pjd.sh)
341
* https://tracker.ceph.com/issues/51964
342
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
343
* https://tracker.ceph.com/issues/57682
344
    client: ERROR: test_reconnect_after_blocklisted
345
* https://tracker.ceph.com/issues/54460
346
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
347 90 Rishabh Dave
348 91 Rishabh Dave
h3. 10 Oct 2022
349
350
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
351 92 Rishabh Dave
352 91 Rishabh Dave
Re-runs:
353
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
354
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
355
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
356 94 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
357 91 Rishabh Dave
358 93 Rishabh Dave
Known bugs:
359 91 Rishabh Dave
* https://tracker.ceph.com/issues/52624
360
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
361
* https://tracker.ceph.com/issues/50223
362
  client.xxxx isn't responding to mclientcaps(revoke
363
* https://tracker.ceph.com/issues/57299
364
  qa: test_dump_loads fails with JSONDecodeError
365
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
366
  qa: fs:mixed-clients kernel_untar_build failure
367
* https://tracker.ceph.com/issues/57206
368
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
369
370 90 Rishabh Dave
h3. 2022 Sep 29
371
372
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
373
374
* https://tracker.ceph.com/issues/55804
375
  Command failed (workunit test suites/pjd.sh)
376
* https://tracker.ceph.com/issues/36593
377
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
378
* https://tracker.ceph.com/issues/52624
379
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
380
* https://tracker.ceph.com/issues/51964
381
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
382
* https://tracker.ceph.com/issues/56632
383
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
384
* https://tracker.ceph.com/issues/50821
385
  qa: untar_snap_rm failure during mds thrashing
386
387 88 Patrick Donnelly
h3. 2022 Sep 26
388
389
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
390
391
* https://tracker.ceph.com/issues/55804
392
    qa failure: pjd link tests failed
393
* https://tracker.ceph.com/issues/57676
394
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
395
* https://tracker.ceph.com/issues/52624
396
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
397
* https://tracker.ceph.com/issues/57580
398
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
399
* https://tracker.ceph.com/issues/48773
400
    qa: scrub does not complete
401
* https://tracker.ceph.com/issues/57299
402
    qa: test_dump_loads fails with JSONDecodeError
403
* https://tracker.ceph.com/issues/57280
404
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
405
* https://tracker.ceph.com/issues/57205
406
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
407
* https://tracker.ceph.com/issues/57656
408
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
409
* https://tracker.ceph.com/issues/57677
410
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
411
* https://tracker.ceph.com/issues/57206
412
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
413
* https://tracker.ceph.com/issues/57446
414
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
415
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
416
    qa: fs:mixed-clients kernel_untar_build failure
417 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
418
    client: ERROR: test_reconnect_after_blocklisted
419 88 Patrick Donnelly
420
421 87 Patrick Donnelly
h3. 2022 Sep 22
422
423
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
424
425
* https://tracker.ceph.com/issues/57299
426
    qa: test_dump_loads fails with JSONDecodeError
427
* https://tracker.ceph.com/issues/57205
428
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
429
* https://tracker.ceph.com/issues/52624
430
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
431
* https://tracker.ceph.com/issues/57580
432
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
433
* https://tracker.ceph.com/issues/57280
434
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
435
* https://tracker.ceph.com/issues/48773
436
    qa: scrub does not complete
437
* https://tracker.ceph.com/issues/56446
438
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
439
* https://tracker.ceph.com/issues/57206
440
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
441
* https://tracker.ceph.com/issues/51267
442
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
443
444
NEW:
445
446
* https://tracker.ceph.com/issues/57656
447
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
448
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
449
    qa: fs:mixed-clients kernel_untar_build failure
450
* https://tracker.ceph.com/issues/57657
451
    mds: scrub locates mismatch between child accounted_rstats and self rstats
452
453
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
454
455
456 80 Venky Shankar
h3. 2022 Sep 16
457 79 Venky Shankar
458
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
459
460
* https://tracker.ceph.com/issues/57446
461
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
462
* https://tracker.ceph.com/issues/57299
463
    qa: test_dump_loads fails with JSONDecodeError
464
* https://tracker.ceph.com/issues/50223
465
    client.xxxx isn't responding to mclientcaps(revoke)
466
* https://tracker.ceph.com/issues/52624
467
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
468
* https://tracker.ceph.com/issues/57205
469
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
470
* https://tracker.ceph.com/issues/57280
471
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
472
* https://tracker.ceph.com/issues/51282
473
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
474
* https://tracker.ceph.com/issues/48203
475
  https://tracker.ceph.com/issues/36593
476
    qa: quota failure
477
    qa: quota failure caused by clients stepping on each other
478
* https://tracker.ceph.com/issues/57580
479
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
480
481 77 Rishabh Dave
482
h3. 2022 Aug 26
483 76 Rishabh Dave
484
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
485
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
486
487
* https://tracker.ceph.com/issues/57206
488
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
489
* https://tracker.ceph.com/issues/56632
490
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
491
* https://tracker.ceph.com/issues/56446
492
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
493
* https://tracker.ceph.com/issues/51964
494
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
495
* https://tracker.ceph.com/issues/53859
496
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
497
498
* https://tracker.ceph.com/issues/54460
499
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
500
* https://tracker.ceph.com/issues/54462
501
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
504
* https://tracker.ceph.com/issues/36593
505
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
506
507
* https://tracker.ceph.com/issues/52624
508
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
509
* https://tracker.ceph.com/issues/55804
510
  Command failed (workunit test suites/pjd.sh)
511
* https://tracker.ceph.com/issues/50223
512
  client.xxxx isn't responding to mclientcaps(revoke)
513
514
515 75 Venky Shankar
h3. 2022 Aug 22
516
517
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
518
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
519
520
* https://tracker.ceph.com/issues/52624
521
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
522
* https://tracker.ceph.com/issues/56446
523
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
524
* https://tracker.ceph.com/issues/55804
525
    Command failed (workunit test suites/pjd.sh)
526
* https://tracker.ceph.com/issues/51278
527
    mds: "FAILED ceph_assert(!segments.empty())"
528
* https://tracker.ceph.com/issues/54460
529
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
530
* https://tracker.ceph.com/issues/57205
531
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
532
* https://tracker.ceph.com/issues/57206
533
    ceph_test_libcephfs_reclaim crashes during test
534
* https://tracker.ceph.com/issues/53859
535
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
536
* https://tracker.ceph.com/issues/50223
537
    client.xxxx isn't responding to mclientcaps(revoke)
538
539 72 Venky Shankar
h3. 2022 Aug 12
540
541
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
542
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
543
544
* https://tracker.ceph.com/issues/52624
545
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
546
* https://tracker.ceph.com/issues/56446
547
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
548
* https://tracker.ceph.com/issues/51964
549
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
550
* https://tracker.ceph.com/issues/55804
551
    Command failed (workunit test suites/pjd.sh)
552
* https://tracker.ceph.com/issues/50223
553
    client.xxxx isn't responding to mclientcaps(revoke)
554
* https://tracker.ceph.com/issues/50821
555
    qa: untar_snap_rm failure during mds thrashing
556
* https://tracker.ceph.com/issues/54460
557 73 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
558 72 Venky Shankar
559 71 Venky Shankar
h3. 2022 Aug 04
560
561
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
562
563
Unrelated teuthology failure on RHEL
564
565 69 Rishabh Dave
h3. 2022 Jul 25
566 68 Rishabh Dave
567
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
568
569
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
570
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
571 74 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
572
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
573 68 Rishabh Dave
574
* https://tracker.ceph.com/issues/55804
575
  Command failed (workunit test suites/pjd.sh)
576
* https://tracker.ceph.com/issues/50223
577
  client.xxxx isn't responding to mclientcaps(revoke)
578
579
* https://tracker.ceph.com/issues/54460
580
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
581
* https://tracker.ceph.com/issues/36593
582
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
583 1 Patrick Donnelly
* https://tracker.ceph.com/issues/54462
584 74 Rishabh Dave
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
585 68 Rishabh Dave
586 67 Patrick Donnelly
h3. 2022 July 22
587
588
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
589
590
MDS_HEALTH_DUMMY error in log fixed by followup commit.
591
transient selinux ping failure
592
593
* https://tracker.ceph.com/issues/56694
594
    qa: avoid blocking forever on hung umount
595
* https://tracker.ceph.com/issues/56695
596
    [RHEL stock] pjd test failures
597
* https://tracker.ceph.com/issues/56696
598
    admin keyring disappears during qa run
599
* https://tracker.ceph.com/issues/56697
600
    qa: fs/snaps fails for fuse
601
* https://tracker.ceph.com/issues/50222
602
    osd: 5.2s0 deep-scrub : stat mismatch
603
* https://tracker.ceph.com/issues/56698
604
    client: FAILED ceph_assert(_size == 0)
605
* https://tracker.ceph.com/issues/50223
606
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
607
608
609 66 Rishabh Dave
h3. 2022 Jul 15
610 65 Rishabh Dave
611
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
612
613
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
614
615
* https://tracker.ceph.com/issues/53859
616
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
617
* https://tracker.ceph.com/issues/55804
618
  Command failed (workunit test suites/pjd.sh)
619
* https://tracker.ceph.com/issues/50223
620
  client.xxxx isn't responding to mclientcaps(revoke)
621
* https://tracker.ceph.com/issues/50222
622
  osd: deep-scrub : stat mismatch
623
624
* https://tracker.ceph.com/issues/56632
625
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
626
* https://tracker.ceph.com/issues/56634
627
  workunit test fs/snaps/snaptest-intodir.sh
628
* https://tracker.ceph.com/issues/56644
629
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
630
631
632
633 61 Rishabh Dave
h3. 2022 July 05
634
635
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
636 62 Rishabh Dave
637 64 Rishabh Dave
On the 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2023-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
638
639
On the 2nd re-run only a few jobs failed -
640
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
642 62 Rishabh Dave
643
* https://tracker.ceph.com/issues/56446
644
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
645
* https://tracker.ceph.com/issues/55804
646
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
647
648
* https://tracker.ceph.com/issues/56445
649
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
650
* https://tracker.ceph.com/issues/51267
651 63 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
652
* https://tracker.ceph.com/issues/50224
653
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
654 62 Rishabh Dave
655
656 61 Rishabh Dave
657 58 Venky Shankar
h3. 2022 July 04
658
659
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
660
(RHEL runs were borked due to https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/; tests ran with --filter-out=rhel)
661
662
* https://tracker.ceph.com/issues/56445
663
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
664
* https://tracker.ceph.com/issues/56446
665 59 Rishabh Dave
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
666
* https://tracker.ceph.com/issues/51964
667
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
668
* https://tracker.ceph.com/issues/52624
669 60 Rishabh Dave
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
670 59 Rishabh Dave
671 57 Venky Shankar
h3. 2022 June 20
672
673
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
674
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
675
676
* https://tracker.ceph.com/issues/52624
677
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
678
* https://tracker.ceph.com/issues/55804
679
    qa failure: pjd link tests failed
680
* https://tracker.ceph.com/issues/54108
681
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
682
* https://tracker.ceph.com/issues/55332
683
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
684
685 56 Patrick Donnelly
h3. 2022 June 13
686
687
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
688
689
* https://tracker.ceph.com/issues/56024
690
    cephadm: removes ceph.conf during qa run causing command failure
691
* https://tracker.ceph.com/issues/48773
692
    qa: scrub does not complete
693
* https://tracker.ceph.com/issues/56012
694
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
695
696
697 55 Venky Shankar
h3. 2022 Jun 13
698 54 Venky Shankar
699
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
700
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
701
702
* https://tracker.ceph.com/issues/52624
703
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
704
* https://tracker.ceph.com/issues/51964
705
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
706
* https://tracker.ceph.com/issues/53859
707
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
708
* https://tracker.ceph.com/issues/55804
709
    qa failure: pjd link tests failed
710
* https://tracker.ceph.com/issues/56003
711
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
712
* https://tracker.ceph.com/issues/56011
713
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
714
* https://tracker.ceph.com/issues/56012
715
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
716
717 53 Venky Shankar
h3. 2022 Jun 07
718
719
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
720
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
721
722
* https://tracker.ceph.com/issues/52624
723
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
724
* https://tracker.ceph.com/issues/50223
725
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
726
* https://tracker.ceph.com/issues/50224
727
    qa: test_mirroring_init_failure_with_recovery failure
728
729 51 Venky Shankar
h3. 2022 May 12
730
731
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
732 52 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
733 51 Venky Shankar
734
* https://tracker.ceph.com/issues/52624
735
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
736
* https://tracker.ceph.com/issues/50223
737
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
738
* https://tracker.ceph.com/issues/55332
739
    Failure in snaptest-git-ceph.sh
740
* https://tracker.ceph.com/issues/53859
741
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
742
* https://tracker.ceph.com/issues/55538
743 1 Patrick Donnelly
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
744 52 Venky Shankar
* https://tracker.ceph.com/issues/55258
745
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
746 51 Venky Shankar
747 49 Venky Shankar
h3. 2022 May 04
748
749 50 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
750
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
751
752 49 Venky Shankar
* https://tracker.ceph.com/issues/52624
753
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
754
* https://tracker.ceph.com/issues/50223
755
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
756
* https://tracker.ceph.com/issues/55332
757
    Failure in snaptest-git-ceph.sh
758
* https://tracker.ceph.com/issues/53859
759
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
760
* https://tracker.ceph.com/issues/55516
761
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
762
* https://tracker.ceph.com/issues/55537
763
    mds: crash during fs:upgrade test
764
* https://tracker.ceph.com/issues/55538
765
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
766
767 48 Venky Shankar
h3. 2022 Apr 25
768
769
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
770
771
* https://tracker.ceph.com/issues/52624
772
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
773
* https://tracker.ceph.com/issues/50223
774
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
775
* https://tracker.ceph.com/issues/55258
776
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
777
* https://tracker.ceph.com/issues/55377
778
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
779
780 47 Venky Shankar
h3. 2022 Apr 14
781
782
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
783
784
* https://tracker.ceph.com/issues/52624
785
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
786
* https://tracker.ceph.com/issues/50223
787
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
788
* https://tracker.ceph.com/issues/52438
789
    qa: ffsb timeout
790
* https://tracker.ceph.com/issues/55170
791
    mds: crash during rejoin (CDir::fetch_keys)
792
* https://tracker.ceph.com/issues/55331
793
    pjd failure
794
* https://tracker.ceph.com/issues/48773
795
    qa: scrub does not complete
796
* https://tracker.ceph.com/issues/55332
797
    Failure in snaptest-git-ceph.sh
798
* https://tracker.ceph.com/issues/55258
799
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
800
801 45 Venky Shankar
h3. 2022 Apr 11
802
803 46 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
804 45 Venky Shankar
805
* https://tracker.ceph.com/issues/48773
806
    qa: scrub does not complete
807
* https://tracker.ceph.com/issues/52624
808
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
809
* https://tracker.ceph.com/issues/52438
810
    qa: ffsb timeout
811
* https://tracker.ceph.com/issues/48680
812
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
813
* https://tracker.ceph.com/issues/55236
814
    qa: fs/snaps tests fails with "hit max job timeout"
815
* https://tracker.ceph.com/issues/54108
816
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
817
* https://tracker.ceph.com/issues/54971
818
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
819
* https://tracker.ceph.com/issues/50223
820
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
821
* https://tracker.ceph.com/issues/55258
822
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
823
824 44 Venky Shankar
h3. 2022 Mar 21
825 42 Venky Shankar
826 43 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
827
828
Run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
829
830
831
h3. 2022 Mar 08
832
833 42 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
834
835
rerun with
836
- (drop) https://github.com/ceph/ceph/pull/44679
837
- (drop) https://github.com/ceph/ceph/pull/44958
838
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
839
840
* https://tracker.ceph.com/issues/54419 (new)
841
    `ceph orch upgrade start` seems to never reach completion
842
* https://tracker.ceph.com/issues/51964
843
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
844
* https://tracker.ceph.com/issues/52624
845
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
846
* https://tracker.ceph.com/issues/50223
847
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
848
* https://tracker.ceph.com/issues/52438
849
    qa: ffsb timeout
850
* https://tracker.ceph.com/issues/50821
851
    qa: untar_snap_rm failure during mds thrashing
852
853
854 41 Venky Shankar
h3. 2022 Feb 09
855
856
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
857
858
rerun with
859
- (drop) https://github.com/ceph/ceph/pull/37938
860
- (drop) https://github.com/ceph/ceph/pull/44335
861
- (drop) https://github.com/ceph/ceph/pull/44491
862
- (drop) https://github.com/ceph/ceph/pull/44501
863
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
864
865
* https://tracker.ceph.com/issues/51964
866
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
867
* https://tracker.ceph.com/issues/54066
868
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
869
* https://tracker.ceph.com/issues/48773
870
    qa: scrub does not complete
871
* https://tracker.ceph.com/issues/52624
872
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
873
* https://tracker.ceph.com/issues/50223
874
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
875
* https://tracker.ceph.com/issues/52438
876
    qa: ffsb timeout
877
878 40 Patrick Donnelly
h3. 2022 Feb 01
879
880
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
881
882
* https://tracker.ceph.com/issues/54107
883
    kclient: hang during umount
884
* https://tracker.ceph.com/issues/54106
885
    kclient: hang during workunit cleanup
886
* https://tracker.ceph.com/issues/54108
887
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
888
* https://tracker.ceph.com/issues/48773
889
    qa: scrub does not complete
890
* https://tracker.ceph.com/issues/52438
891
    qa: ffsb timeout
892
893
894 36 Venky Shankar
h3. 2022 Jan 13
895
896
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
897 39 Venky Shankar
898 36 Venky Shankar
rerun with:
899 38 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
900
- (drop) https://github.com/ceph/ceph/pull/43184
901 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
902
903
* https://tracker.ceph.com/issues/50223
904
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
905
* https://tracker.ceph.com/issues/51282
906
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
907
* https://tracker.ceph.com/issues/48773
908
    qa: scrub does not complete
909
* https://tracker.ceph.com/issues/52624
910
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
911
* https://tracker.ceph.com/issues/53859
912
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
913
914 34 Venky Shankar
h3. 2022 Jan 03
915
916
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
917
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
918
919
* https://tracker.ceph.com/issues/50223
920
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
921
* https://tracker.ceph.com/issues/51964
922
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
923
* https://tracker.ceph.com/issues/51267
924
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
925
* https://tracker.ceph.com/issues/51282
926
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
927
* https://tracker.ceph.com/issues/50821
928
    qa: untar_snap_rm failure during mds thrashing
929
* https://tracker.ceph.com/issues/51278
930
    mds: "FAILED ceph_assert(!segments.empty())"
931 35 Ramana Raja
* https://tracker.ceph.com/issues/52279
932
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
933
934 34 Venky Shankar
935 33 Patrick Donnelly
h3. 2021 Dec 22
936
937
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
938
939
* https://tracker.ceph.com/issues/52624
940
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
941
* https://tracker.ceph.com/issues/50223
942
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
943
* https://tracker.ceph.com/issues/52279
944
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
945
* https://tracker.ceph.com/issues/50224
946
    qa: test_mirroring_init_failure_with_recovery failure
947
* https://tracker.ceph.com/issues/48773
948
    qa: scrub does not complete
949
950
951 32 Venky Shankar
h3. 2021 Nov 30
952
953
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
954
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
955
956
* https://tracker.ceph.com/issues/53436
957
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
958
* https://tracker.ceph.com/issues/51964
959
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
960
* https://tracker.ceph.com/issues/48812
961
    qa: test_scrub_pause_and_resume_with_abort failure
962
* https://tracker.ceph.com/issues/51076
963
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
964
* https://tracker.ceph.com/issues/50223
965
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
966
* https://tracker.ceph.com/issues/52624
967
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
968
* https://tracker.ceph.com/issues/50250
969
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
970
971
972 31 Patrick Donnelly
h3. 2021 November 9
973
974
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
975
976
* https://tracker.ceph.com/issues/53214
977
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
978
* https://tracker.ceph.com/issues/48773
979
    qa: scrub does not complete
980
* https://tracker.ceph.com/issues/50223
981
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
982
* https://tracker.ceph.com/issues/51282
983
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
984
* https://tracker.ceph.com/issues/52624
985
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
986
* https://tracker.ceph.com/issues/53216
987
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
988
* https://tracker.ceph.com/issues/50250
989
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
990
991
992
993 30 Patrick Donnelly
h3. 2021 November 03
994
995
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
996
997
* https://tracker.ceph.com/issues/51964
998
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
999
* https://tracker.ceph.com/issues/51282
1000
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1001
* https://tracker.ceph.com/issues/52436
1002
    fs/ceph: "corrupt mdsmap"
1003
* https://tracker.ceph.com/issues/53074
1004
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1005
* https://tracker.ceph.com/issues/53150
1006
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
1007
* https://tracker.ceph.com/issues/53155
1008
    MDSMonitor: assertion during upgrade to v16.2.5+
1009
1010
1011 29 Patrick Donnelly
h3. 2021 October 26
1012
1013
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1014
1015
* https://tracker.ceph.com/issues/53074
1016
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1017
* https://tracker.ceph.com/issues/52997
1018
    testing: hanging umount
1019
* https://tracker.ceph.com/issues/50824
1020
    qa: snaptest-git-ceph bus error
1021
* https://tracker.ceph.com/issues/52436
1022
    fs/ceph: "corrupt mdsmap"
1023
* https://tracker.ceph.com/issues/48773
1024
    qa: scrub does not complete
1025
* https://tracker.ceph.com/issues/53082
1026
    ceph-fuse: segmentation fault in Client::handle_mds_map
1027
* https://tracker.ceph.com/issues/50223
1028
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1029
* https://tracker.ceph.com/issues/52624
1030
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1031
* https://tracker.ceph.com/issues/50224
1032
    qa: test_mirroring_init_failure_with_recovery failure
1033
* https://tracker.ceph.com/issues/50821
1034
    qa: untar_snap_rm failure during mds thrashing
1035
* https://tracker.ceph.com/issues/50250
1036
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1037
1038
1039
1040 27 Patrick Donnelly
h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures were caused by a teuthology bug: https://tracker.ceph.com/issues/52944

A new test caused a failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures were caused by the cephadm upgrade test; these were fixed in a follow-up qa commit.

The test_simple failures were caused by a PR in this set.

A few reruns were needed because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure was caused by a PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS-abort class of failures was caused by a PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. There was some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by a PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

One additional failure was caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing