h1. MAIN

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 Jan 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

Re-runs:
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458

Known bugs:
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on RHEL

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by follow-up commit.
Transient selinux ping failure.

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writebcak once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with:
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hang ing umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmenetation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hang ing umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
1075 22 Patrick Donnelly
h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure was caused by a PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS abort class of failures was caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun related to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

One class of failures was caused by a PR.

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

An additional failure was caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing