h1. MAIN

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays; merging PRs whose tests pass)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 30 Jan 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
  [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55332
  Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/58219
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'

* "Failed to load ceph-mgr modules: prometheus" in cluster log
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
* Created https://tracker.ceph.com/issues/58564
  workunit test suites/dbench.sh failed with error code 1

h3. 15 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 08 Dec 2022

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803

(lots of transient git.ceph.com failures)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58244
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)

h3. 14 Oct 2022

https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 10 Oct 2022

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/

reruns
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
    ** needed this PR to be merged into the ceph-ci branch: https://github.com/ceph/ceph/pull/47458

known bugs
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57299
  qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
* https://tracker.ceph.com/issues/50821
  qa: untar_snap_rm failure during mds thrashing

h3. 2022 Sep 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109

* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57677
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57682
    client: ERROR: test_reconnect_after_blocklisted

h3. 2022 Sep 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701

* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...

NEW:

* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57657
    mds: scrub locates mismatch between child accounted_rstats and self rstats

Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799

h3. 2022 Sep 16

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828

* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/57299
    qa: test_dump_loads fails with JSONDecodeError
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48203
  https://tracker.ceph.com/issues/36593
    qa: quota failure
    qa: quota failure caused by clients stepping on each other
* https://tracker.ceph.com/issues/57580
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)

h3. 2022 Aug 26

http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/

* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56446
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 22

https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57206
    ceph_test_libcephfs_reclaim crashes during test
* https://tracker.ceph.com/issues/53859
    Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 12

https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
    client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1

h3. 2022 Aug 04

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)

Unrelated teuthology failure on rhel

h3. 2022 Jul 25

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/

1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/

* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/36593
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 July 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756

MDS_HEALTH_DUMMY error in log fixed by followup commit.
transient selinux ping failure

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/56696
    admin keyring disappears during qa run
* https://tracker.ceph.com/issues/56697
    qa: fs/snaps fails for fuse
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/56698
    client: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 Jul 15

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/50222
  osd: deep-scrub : stat mismatch

* https://tracker.ceph.com/issues/56632
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/56634
  workunit test fs/snaps/snaptest-intodir.sh
* https://tracker.ceph.com/issues/56644
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)

h3. 2022 July 05

http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/

On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/

On 2nd re-run only a few jobs failed -
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/

* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/55804
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/

* https://tracker.ceph.com/issues/56445
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/51267
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 July 04

https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)

* https://tracker.ceph.com/issues/56445
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2022 June 20

https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)

h3. 2022 June 13

https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/

* https://tracker.ceph.com/issues/56024
    cephadm: removes ceph.conf during qa run causing command failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 13

https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55804
    qa failure: pjd link tests failed
* https://tracker.ceph.com/issues/56003
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
* https://tracker.ceph.com/issues/56012
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())

h3. 2022 Jun 07

https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2022 May 12

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent)

h3. 2022 May 04

https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/55516
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
* https://tracker.ceph.com/issues/55537
    mds: crash during fs:upgrade test
* https://tracker.ceph.com/issues/55538
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)

h3. 2022 Apr 25

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
* https://tracker.ceph.com/issues/55377
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once

h3. 2022 Apr 14

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/55170
    mds: crash during rejoin (CDir::fetch_keys)
* https://tracker.ceph.com/issues/55331
    pjd failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/55332
    Failure in snaptest-git-ceph.sh
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Apr 11

https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/48680
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
* https://tracker.ceph.com/issues/55236
    qa: fs/snaps tests fails with "hit max job timeout"
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/54971
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/55258
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs

h3. 2022 Mar 21

https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/

Run didn't go well, lots of failures - debugging by dropping PRs and running against master branch. Only merging unrelated PRs that pass tests.

h3. 2022 Mar 08

https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/44679
- (drop) https://github.com/ceph/ceph/pull/44958
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/

* https://tracker.ceph.com/issues/54419 (new)
    `ceph orch upgrade start` seems to never reach completion
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2022 Feb 09

https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/

rerun with
- (drop) https://github.com/ceph/ceph/pull/37938
- (drop) https://github.com/ceph/ceph/pull/44335
- (drop) https://github.com/ceph/ceph/pull/44491
- (drop) https://github.com/ceph/ceph/pull/44501
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54066
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Feb 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526

* https://tracker.ceph.com/issues/54107
    kclient: hang during umount
* https://tracker.ceph.com/issues/54106
    kclient: hang during workunit cleanup
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 Jan 13

https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

rerun with:
- (add) https://github.com/ceph/ceph/pull/44570
- (drop) https://github.com/ceph/ceph/pull/43184
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53859
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)

h3. 2022 Jan 03

https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter

h3. 2021 Dec 22

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52279
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 Nov 30

https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)

* https://tracker.ceph.com/issues/53436
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 9

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315

* https://tracker.ceph.com/issues/53214
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/53216
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 November 03

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/53150
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
* https://tracker.ceph.com/issues/53155
    MDSMonitor: assertion during upgrade to v16.2.5+

h3. 2021 October 26

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447

* https://tracker.ceph.com/issues/53074
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/53082
    ceph-fuse: segmentation fault in Client::handle_mds_map
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 October 19

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028

* https://tracker.ceph.com/issues/52995
    qa: test_standby_count_wanted failure
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/52996
    qa: test_perf_counters via test_openfiletable
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/52997
    testing: hanging umount
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 October 12

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211

Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944

New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52948
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/52949
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}

h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1092 16 Patrick Donnelly
h3. 2021 July 01
1093
1094
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
1095
1096
* https://tracker.ceph.com/issues/51197
1097
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
1098
* https://tracker.ceph.com/issues/50866
1099
    osd: stat mismatch on objects
1100
* https://tracker.ceph.com/issues/48773
1101
    qa: scrub does not complete
1102
1103
1104 15 Patrick Donnelly
h3. 2021 June 26
1105
1106
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
1107
1108
* https://tracker.ceph.com/issues/51183
1109
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1110
* https://tracker.ceph.com/issues/51410
1111
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
1112
* https://tracker.ceph.com/issues/48773
1113
    qa: scrub does not complete
1114
* https://tracker.ceph.com/issues/51282
1115
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1116
* https://tracker.ceph.com/issues/51169
1117
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1118
* https://tracker.ceph.com/issues/48772
1119
    qa: pjd: not ok 9, 44, 80
1120
1121
1122 14 Patrick Donnelly
h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure was caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

The MDS-abort class of failures was caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. There is some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* One class of failures was caused by a PR.
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

In addition, one failure was caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing