h1. MAIN

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 JAN 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

reruns
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458

known bugs
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on RHEL.

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fail with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

Run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12
1065
1066
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
1067
1068
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
1069
1070
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
1071
1072
1073
* https://tracker.ceph.com/issues/51282
1074
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1075
* https://tracker.ceph.com/issues/52948
1076
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1077
* https://tracker.ceph.com/issues/48773
1078
    qa: scrub does not complete
1079
* https://tracker.ceph.com/issues/50224
1080
    qa: test_mirroring_init_failure_with_recovery failure
1081
* https://tracker.ceph.com/issues/52949
1082
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
1083
1084
1085 25 Patrick Donnelly
h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

Regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* One class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

One additional failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing