Main » History » Version 170
Rishabh Dave, 09/05/2023 02:48 PM
h1. MAIN

h3. NEW ENTRY BELOW

h3. 5 Sep 2023 (Rishabh Dave)
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
This run has failures, but according to Adam King they are not relevant and can be ignored.

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

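Several of the recurring failures above (fs/quota/quota.sh, test_quota.TestQuota, DiskQuotaExceeded) exercise CephFS quotas, which are configured through virtual extended attributes on directories. A minimal sketch of that interface, assuming a CephFS mounted at /mnt/cephfs (mount point, directory name, and values are illustrative, not taken from these runs):

```shell
# Set a 100 MB byte quota on a directory of a mounted CephFS.
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/somedir

# Cap the number of files under the directory.
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/somedir

# Read a quota back; quota.sh drives this same vxattr interface,
# and "setfattr: .: Invalid argument" is it rejecting a set.
getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir

# A quota is removed by setting it to 0.
setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/somedir
```

These commands require a live CephFS mount, so they are only a reference for reproducing the quota failures by hand.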
h3. 31 Aug 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
  iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
  qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
  error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
  qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
  iozone: command not found
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
  src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
  task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
  iozone: command not found
* https://tracker.ceph.com/issues/61399
  qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
  AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023 (Rishabh Dave)

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/

There were a few failures from one of the PRs under testing. The following run confirms that removing that PR fixes these failures:
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/

The blogbench.sh failures were seen on the above runs for the first time; the following run against the main branch confirms that the blogbench.sh failure was not related to any of the PRs under testing:
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace

* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023 (Venky Shankar)

* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
  src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023 (Venky Shankar)

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
  src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
  coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
  src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2 (Rishabh Dave)

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023 (Venky Shankar)

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
  LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023 (Rishabh Dave)

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

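The recurring "scrub does not complete: reached max tries" entries come from the QA scrub thrasher waiting for a forward scrub to finish. A forward scrub can be driven and inspected by hand, roughly as below (the file system name "cephfs" and rank 0 are assumptions for illustration):

```shell
# Kick off a recursive forward scrub from the root of the file system,
# addressed to rank 0 of the "cephfs" file system.
ceph tell mds.cephfs:0 scrub start / recursive

# Poll scrub progress; the thrasher repeatedly checks this and the
# tracker above is it giving up after its maximum number of tries.
ceph tell mds.cephfs:0 scrub status
```

These commands need a running cluster, so they are only a reference for reproducing the scrub-thrashing timeouts outside teuthology.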
h3. 12 July 2023 (Rishabh Dave)

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023 (Patrick Donnelly)

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023 (Rishabh Dave)

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023 (Venky Shankar)

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
  qa/quincy: "cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
  qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
  cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
  src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
  cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
  snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023 (Venky Shankar)

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
  src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
  coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023 (Rishabh Dave)

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

542 | 136 | Patrick Donnelly | h3. 24 May 2023 |
543 | |||
544 | https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/ |
||
545 | |||
546 | * https://tracker.ceph.com/issues/57676 |
||
547 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
548 | * https://tracker.ceph.com/issues/59683 |
||
549 | Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests |
||
550 | * https://tracker.ceph.com/issues/61399 |
||
551 | qa: "[Makefile:299: ior] Error 1" |
||
552 | * https://tracker.ceph.com/issues/61265 |
||
553 | qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount |
||
554 | * https://tracker.ceph.com/issues/59348 |
||
555 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
556 | * https://tracker.ceph.com/issues/59346 |
||
557 | qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
558 | * https://tracker.ceph.com/issues/61400 |
||
559 | valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc |
||
560 | * https://tracker.ceph.com/issues/54460 |
||
561 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
562 | * https://tracker.ceph.com/issues/51964 |
||
563 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
564 | * https://tracker.ceph.com/issues/59344 |
||
565 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
566 | * https://tracker.ceph.com/issues/61407 |
||
567 | mds: abort on CInode::verify_dirfrags |
||
568 | * https://tracker.ceph.com/issues/48773 |
||
569 | qa: scrub does not complete |
||
570 | * https://tracker.ceph.com/issues/57655 |
||
571 | qa: fs:mixed-clients kernel_untar_build failure |
||
572 | * https://tracker.ceph.com/issues/61409 |
||
573 | qa: _test_stale_caps does not wait for file flush before stat |
||
574 | |||
575 | 128 | Venky Shankar | h3. 15 May 2023 |
576 | |||
577 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020 |
||
578 | 130 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6 |
579 | 128 | Venky Shankar | |
580 | * https://tracker.ceph.com/issues/52624 |
||
581 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
582 | * https://tracker.ceph.com/issues/54460 |
||
583 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
584 | * https://tracker.ceph.com/issues/57676 |
||
585 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
586 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
587 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
588 | * https://tracker.ceph.com/issues/59348 |
||
589 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
590 | * https://tracker.ceph.com/issues/61148 |
||
591 | dbench test results in call trace in dmesg [kclient bug] |
||
592 | 131 | Venky Shankar | * https://tracker.ceph.com/issues/58340 |
593 | mds: fsstress.sh hangs with multimds |
||
594 | 133 | Kotresh Hiremath Ravishankar | |
595 | 134 | Kotresh Hiremath Ravishankar | |
596 | 125 | Venky Shankar | h3. 11 May 2023 |
597 | |||
598 | 129 | Rishabh Dave | https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/ |
599 | |||
600 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
601 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
602 | * https://tracker.ceph.com/issues/59348 |
||
603 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
604 | * https://tracker.ceph.com/issues/57655 |
||
605 | qa: fs:mixed-clients kernel_untar_build failure |
||
606 | * https://tracker.ceph.com/issues/57676 |
||
607 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
608 | * https://tracker.ceph.com/issues/55805 |
||
609 | error during scrub thrashing reached max tries in 900 secs |
||
610 | * https://tracker.ceph.com/issues/54460 |
||
611 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
612 | * https://tracker.ceph.com/issues/57656 |
||
613 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
614 | * https://tracker.ceph.com/issues/58220 |
||
615 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
616 | * https://tracker.ceph.com/issues/58220#note-9 |
||
617 | workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure |
||
618 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/59342 |
619 | qa/workunits/kernel_untar_build.sh failed when compiling the Linux source |
||
620 | 134 | Kotresh Hiremath Ravishankar | * https://tracker.ceph.com/issues/58949 |
621 | test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write |
||
622 | 135 | Kotresh Hiremath Ravishankar | * https://tracker.ceph.com/issues/61243 (NEW) |
623 | test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed |
||
624 | 129 | Rishabh Dave | |
625 | h3. 11 May 2023 |
||
626 | |||
627 | 125 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005 |
628 | 127 | Venky Shankar | |
629 | (no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 |
||
630 | 126 | Venky Shankar | was included in the branch; however, the PR has since been updated and needs a retest). |
631 | 125 | Venky Shankar | |
632 | * https://tracker.ceph.com/issues/52624 |
||
633 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
634 | * https://tracker.ceph.com/issues/54460 |
||
635 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
636 | * https://tracker.ceph.com/issues/57676 |
||
637 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
638 | * https://tracker.ceph.com/issues/59683 |
||
639 | Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests |
||
640 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
641 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
642 | * https://tracker.ceph.com/issues/59348 |
||
643 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
644 | |||
645 | 124 | Venky Shankar | h3. 09 May 2023 |
646 | |||
647 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554 |
||
648 | |||
649 | * https://tracker.ceph.com/issues/52624 |
||
650 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
651 | * https://tracker.ceph.com/issues/58340 |
||
652 | mds: fsstress.sh hangs with multimds |
||
653 | * https://tracker.ceph.com/issues/54460 |
||
654 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
655 | * https://tracker.ceph.com/issues/57676 |
||
656 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
657 | * https://tracker.ceph.com/issues/51964 |
||
658 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
659 | * https://tracker.ceph.com/issues/59350 |
||
660 | qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR |
||
661 | * https://tracker.ceph.com/issues/59683 |
||
662 | Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests |
||
663 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
664 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
665 | * https://tracker.ceph.com/issues/59348 |
||
666 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
667 | |||
668 | 123 | Venky Shankar | h3. 10 Apr 2023 |
669 | |||
670 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356 |
||
671 | |||
672 | * https://tracker.ceph.com/issues/52624 |
||
673 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
674 | * https://tracker.ceph.com/issues/58340 |
||
675 | mds: fsstress.sh hangs with multimds |
||
676 | * https://tracker.ceph.com/issues/54460 |
||
677 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
678 | * https://tracker.ceph.com/issues/57676 |
||
679 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
680 | * https://tracker.ceph.com/issues/51964 |
||
681 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
682 | 119 | Rishabh Dave | |
683 | 120 | Rishabh Dave | h3. 31 Mar 2023 |
684 | 121 | Rishabh Dave | |
685 | 120 | Rishabh Dave | run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/ |
686 | 122 | Rishabh Dave | re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/ |
687 | re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/ |
||
688 | 120 | Rishabh Dave | |
689 | There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs). |
||
690 | |||
691 | * https://tracker.ceph.com/issues/57676 |
||
692 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
693 | * https://tracker.ceph.com/issues/54460 |
||
694 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
695 | * https://tracker.ceph.com/issues/58220 |
||
696 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
697 | * https://tracker.ceph.com/issues/58220#note-9 |
||
698 | workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure |
||
699 | * https://tracker.ceph.com/issues/56695 |
||
700 | Command failed (workunit test suites/pjd.sh) |
||
701 | * https://tracker.ceph.com/issues/58564 |
||
702 | workunit dbench failed with error code 1 |
||
703 | * https://tracker.ceph.com/issues/57206 |
||
704 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
705 | * https://tracker.ceph.com/issues/57580 |
||
706 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
707 | * https://tracker.ceph.com/issues/58940 |
||
708 | ceph osd hit ceph_abort |
||
709 | * https://tracker.ceph.com/issues/55805 |
||
710 | error during scrub thrashing: reached max tries in 900 secs |
||
711 | |||
712 | 118 | Venky Shankar | h3. 30 March 2023 |
713 | |||
714 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747 |
||
715 | |||
716 | * https://tracker.ceph.com/issues/58938 |
||
717 | qa: xfstests-dev's generic test suite has 7 failures with kclient |
||
718 | * https://tracker.ceph.com/issues/51964 |
||
719 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
720 | * https://tracker.ceph.com/issues/58340 |
||
721 | mds: fsstress.sh hangs with multimds |
||
722 | |||
723 | 114 | Venky Shankar | h3. 29 March 2023 |
724 | |||
725 | 115 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222 |
726 | 114 | Venky Shankar | |
727 | * https://tracker.ceph.com/issues/56695 |
||
728 | [RHEL stock] pjd test failures |
||
729 | * https://tracker.ceph.com/issues/57676 |
||
730 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
731 | * https://tracker.ceph.com/issues/57087 |
||
732 | qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure |
||
733 | * https://tracker.ceph.com/issues/58340 |
||
734 | mds: fsstress.sh hangs with multimds |
||
735 | 116 | Venky Shankar | * https://tracker.ceph.com/issues/57655 |
736 | qa: fs:mixed-clients kernel_untar_build failure |
||
737 | 114 | Venky Shankar | * https://tracker.ceph.com/issues/59230 |
738 | Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage) |
||
739 | 117 | Venky Shankar | * https://tracker.ceph.com/issues/58938 |
740 | qa: xfstests-dev's generic test suite has 7 failures with kclient |
||
741 | 114 | Venky Shankar | |
742 | 113 | Venky Shankar | h3. 13 Mar 2023 |
743 | |||
744 | * https://tracker.ceph.com/issues/56695 |
||
745 | [RHEL stock] pjd test failures |
||
746 | * https://tracker.ceph.com/issues/57676 |
||
747 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
748 | * https://tracker.ceph.com/issues/51964 |
||
749 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
750 | * https://tracker.ceph.com/issues/54460 |
||
751 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
752 | * https://tracker.ceph.com/issues/57656 |
||
753 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
754 | |||
755 | 112 | Venky Shankar | h3. 09 Mar 2023 |
756 | |||
757 | https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/ |
||
758 | https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/ |
||
759 | |||
760 | * https://tracker.ceph.com/issues/56695 |
||
761 | [RHEL stock] pjd test failures |
||
762 | * https://tracker.ceph.com/issues/57676 |
||
763 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
764 | * https://tracker.ceph.com/issues/51964 |
||
765 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
766 | * https://tracker.ceph.com/issues/54460 |
||
767 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
768 | * https://tracker.ceph.com/issues/58340 |
||
769 | mds: fsstress.sh hangs with multimds |
||
770 | * https://tracker.ceph.com/issues/57087 |
||
771 | qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure |
||
772 | |||
773 | 111 | Venky Shankar | h3. 07 Mar 2023 |
774 | |||
775 | https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/ |
||
776 | https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/ |
||
777 | |||
778 | * https://tracker.ceph.com/issues/56695 |
||
779 | [RHEL stock] pjd test failures |
||
780 | * https://tracker.ceph.com/issues/57676 |
||
781 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
782 | * https://tracker.ceph.com/issues/51964 |
||
783 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
784 | * https://tracker.ceph.com/issues/57656 |
||
785 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
786 | * https://tracker.ceph.com/issues/57655 |
||
787 | qa: fs:mixed-clients kernel_untar_build failure |
||
788 | * https://tracker.ceph.com/issues/58220 |
||
789 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
790 | * https://tracker.ceph.com/issues/54460 |
||
791 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
792 | * https://tracker.ceph.com/issues/58934 |
||
793 | snaptest-git-ceph.sh failure with ceph-fuse |
||
794 | |||
795 | 109 | Venky Shankar | h3. 28 Feb 2023 |
796 | |||
797 | https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/ |
||
798 | |||
799 | * https://tracker.ceph.com/issues/56695 |
||
800 | [RHEL stock] pjd test failures |
||
801 | * https://tracker.ceph.com/issues/57676 |
||
802 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
803 | * https://tracker.ceph.com/issues/56446 |
||
804 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
805 | 110 | Venky Shankar | |
806 | 109 | Venky Shankar | (teuthology infra issues causing testing delays - merging PRs whose tests are passing) |
807 | |||
808 | 107 | Venky Shankar | h3. 25 Jan 2023 |
809 | |||
810 | https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/ |
||
811 | |||
812 | * https://tracker.ceph.com/issues/52624 |
||
813 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
814 | * https://tracker.ceph.com/issues/56695 |
||
815 | [RHEL stock] pjd test failures |
||
816 | * https://tracker.ceph.com/issues/57676 |
||
817 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
818 | * https://tracker.ceph.com/issues/56446 |
||
819 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
820 | * https://tracker.ceph.com/issues/57206 |
||
821 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
822 | * https://tracker.ceph.com/issues/58220 |
||
823 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
824 | * https://tracker.ceph.com/issues/58340 |
||
825 | mds: fsstress.sh hangs with multimds |
||
826 | * https://tracker.ceph.com/issues/56011 |
||
827 | fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison |
||
828 | * https://tracker.ceph.com/issues/54460 |
||
829 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
830 | |||
831 | 101 | Rishabh Dave | h3. 30 Jan 2023 |
832 | |||
833 | run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/ |
||
834 | re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/ |
||
835 | re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/ |
||
836 | |||
837 | 105 | Rishabh Dave | * https://tracker.ceph.com/issues/52624 |
838 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
839 | 101 | Rishabh Dave | * https://tracker.ceph.com/issues/56695 |
840 | [RHEL stock] pjd test failures |
||
841 | * https://tracker.ceph.com/issues/57676 |
||
842 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
843 | * https://tracker.ceph.com/issues/55332 |
||
844 | Failure in snaptest-git-ceph.sh |
||
845 | * https://tracker.ceph.com/issues/51964 |
||
846 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
847 | * https://tracker.ceph.com/issues/56446 |
||
848 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
849 | * https://tracker.ceph.com/issues/57655 |
||
850 | qa: fs:mixed-clients kernel_untar_build failure |
||
851 | * https://tracker.ceph.com/issues/54460 |
||
852 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
853 | * https://tracker.ceph.com/issues/58340 |
||
854 | mds: fsstress.sh hangs with multimds |
||
855 | 103 | Rishabh Dave | * https://tracker.ceph.com/issues/58219 |
856 | Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json' |
||
857 | 101 | Rishabh Dave | |
858 | 102 | Rishabh Dave | * "Failed to load ceph-mgr modules: prometheus" in cluster log |
859 | http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086 |
||
860 | According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8 |
||
861 | 106 | Rishabh Dave | * Created https://tracker.ceph.com/issues/58564 |
862 | workunit test suites/dbench.sh failed with error code 1 |
||
863 | 102 | Rishabh Dave | |
864 | 100 | Venky Shankar | h3. 15 Dec 2022 |
865 | |||
866 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736 |
||
867 | |||
868 | * https://tracker.ceph.com/issues/52624 |
||
869 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
870 | * https://tracker.ceph.com/issues/56695 |
||
871 | [RHEL stock] pjd test failures |
||
872 | * https://tracker.ceph.com/issues/58219 |
||
873 | Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration) |
||
874 | * https://tracker.ceph.com/issues/57655 |
||
875 | qa: fs:mixed-clients kernel_untar_build failure |
||
876 | * https://tracker.ceph.com/issues/57676 |
||
877 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
878 | * https://tracker.ceph.com/issues/58340 |
||
879 | mds: fsstress.sh hangs with multimds |
||
880 | |||
881 | 96 | Venky Shankar | h3. 08 Dec 2022 |
882 | |||
883 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104 |
||
884 | 99 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803 |
885 | 96 | Venky Shankar | |
886 | (lots of transient git.ceph.com failures) |
||
887 | |||
888 | * https://tracker.ceph.com/issues/52624 |
||
889 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
890 | * https://tracker.ceph.com/issues/56695 |
||
891 | [RHEL stock] pjd test failures |
||
892 | * https://tracker.ceph.com/issues/57655 |
||
893 | qa: fs:mixed-clients kernel_untar_build failure |
||
894 | * https://tracker.ceph.com/issues/58219 |
||
895 | Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration) |
||
896 | * https://tracker.ceph.com/issues/58220 |
||
897 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
898 | * https://tracker.ceph.com/issues/57676 |
||
899 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
900 | 97 | Venky Shankar | * https://tracker.ceph.com/issues/53859 |
901 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
902 | 98 | Venky Shankar | * https://tracker.ceph.com/issues/54460 |
903 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
904 | * https://tracker.ceph.com/issues/58244 |
||
905 | Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan) |
||
906 | 96 | Venky Shankar | |
907 | 95 | Venky Shankar | h3. 14 Oct 2022 |
908 | |||
909 | https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/ |
||
910 | https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/ |
||
911 | |||
912 | * https://tracker.ceph.com/issues/52624 |
||
913 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
914 | * https://tracker.ceph.com/issues/55804 |
||
915 | Command failed (workunit test suites/pjd.sh) |
||
916 | * https://tracker.ceph.com/issues/51964 |
||
917 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
918 | * https://tracker.ceph.com/issues/57682 |
||
919 | client: ERROR: test_reconnect_after_blocklisted |
||
920 | * https://tracker.ceph.com/issues/54460 |
||
921 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
922 | 90 | Rishabh Dave | |
923 | 91 | Rishabh Dave | h3. 10 Oct 2022 |
924 | |||
925 | http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/ |
||
926 | 92 | Rishabh Dave | |
927 | 91 | Rishabh Dave | reruns |
928 | * fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/ |
||
929 | * fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/ |
||
930 | * cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/ |
||
931 | 94 | Rishabh Dave | ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458 |
932 | 91 | Rishabh Dave | |
933 | 93 | Rishabh Dave | known bugs |
934 | 91 | Rishabh Dave | * https://tracker.ceph.com/issues/52624 |
935 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
936 | * https://tracker.ceph.com/issues/50223 |
||
937 | client.xxxx isn't responding to mclientcaps(revoke) |
||
938 | * https://tracker.ceph.com/issues/57299 |
||
939 | qa: test_dump_loads fails with JSONDecodeError |
||
940 | * https://tracker.ceph.com/issues/57655 [Exists in main as well] |
||
941 | qa: fs:mixed-clients kernel_untar_build failure |
||
942 | * https://tracker.ceph.com/issues/57206 |
||
943 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
944 | |||
945 | 90 | Rishabh Dave | h3. 2022 Sep 29 |
946 | |||
947 | http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/ |
||
948 | |||
949 | * https://tracker.ceph.com/issues/55804 |
||
950 | Command failed (workunit test suites/pjd.sh) |
||
951 | * https://tracker.ceph.com/issues/36593 |
||
952 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
953 | * https://tracker.ceph.com/issues/52624 |
||
954 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
955 | * https://tracker.ceph.com/issues/51964 |
||
956 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
957 | * https://tracker.ceph.com/issues/56632 |
||
958 | Test failure: test_subvolume_snapshot_clone_quota_exceeded |
||
959 | * https://tracker.ceph.com/issues/50821 |
||
960 | qa: untar_snap_rm failure during mds thrashing |
||
961 | |||
962 | 88 | Patrick Donnelly | h3. 2022 Sep 26 |
963 | |||
964 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109 |
||
965 | |||
966 | * https://tracker.ceph.com/issues/55804 |
||
967 | qa failure: pjd link tests failed |
||
968 | * https://tracker.ceph.com/issues/57676 |
||
969 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
970 | * https://tracker.ceph.com/issues/52624 |
||
971 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
972 | * https://tracker.ceph.com/issues/57580 |
||
973 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
974 | * https://tracker.ceph.com/issues/48773 |
||
975 | qa: scrub does not complete |
||
976 | * https://tracker.ceph.com/issues/57299 |
||
977 | qa: test_dump_loads fails with JSONDecodeError |
||
978 | * https://tracker.ceph.com/issues/57280 |
||
979 | qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman |
||
980 | * https://tracker.ceph.com/issues/57205 |
||
981 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
982 | * https://tracker.ceph.com/issues/57656 |
||
983 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
984 | * https://tracker.ceph.com/issues/57677 |
||
985 | qa: "1 MDSs behind on trimming (MDS_TRIM)" |
||
986 | * https://tracker.ceph.com/issues/57206 |
||
987 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
988 | * https://tracker.ceph.com/issues/57446 |
||
989 | qa: test_subvolume_snapshot_info_if_orphan_clone fails |
||
990 | * https://tracker.ceph.com/issues/57655 [Exists in main as well] |
||
991 | qa: fs:mixed-clients kernel_untar_build failure |
||
992 | 89 | Patrick Donnelly | * https://tracker.ceph.com/issues/57682 |
993 | client: ERROR: test_reconnect_after_blocklisted |
||
994 | 88 | Patrick Donnelly | |
995 | |||
996 | 87 | Patrick Donnelly | h3. 2022 Sep 22 |
997 | |||
998 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701 |
||
999 | |||
1000 | * https://tracker.ceph.com/issues/57299 |
||
1001 | qa: test_dump_loads fails with JSONDecodeError |
||
1002 | * https://tracker.ceph.com/issues/57205 |
||
1003 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1004 | * https://tracker.ceph.com/issues/52624 |
||
1005 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1006 | * https://tracker.ceph.com/issues/57580 |
||
1007 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
1008 | * https://tracker.ceph.com/issues/57280 |
||
1009 | qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman |
||
1010 | * https://tracker.ceph.com/issues/48773 |
||
1011 | qa: scrub does not complete |
||
1012 | * https://tracker.ceph.com/issues/56446 |
||
1013 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1014 | * https://tracker.ceph.com/issues/57206 |
||
1015 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1016 | * https://tracker.ceph.com/issues/51267 |
||
1017 | CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... |
||
1018 | |||
1019 | NEW: |
||
1020 | |||
1021 | * https://tracker.ceph.com/issues/57656 |
||
1022 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1023 | * https://tracker.ceph.com/issues/57655 [Exists in main as well] |
||
1024 | qa: fs:mixed-clients kernel_untar_build failure |
||
1025 | * https://tracker.ceph.com/issues/57657 |
||
1026 | mds: scrub locates mismatch between child accounted_rstats and self rstats |
||
1027 | |||
1028 | Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799 |
||
1029 | |||
1030 | |||
1031 | 80 | Venky Shankar | h3. 2022 Sep 16 |
1032 | 79 | Venky Shankar | |
1033 | https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828 |
||
1034 | |||
1035 | * https://tracker.ceph.com/issues/57446 |
||
1036 | qa: test_subvolume_snapshot_info_if_orphan_clone fails |
||
1037 | * https://tracker.ceph.com/issues/57299 |
||
1038 | qa: test_dump_loads fails with JSONDecodeError |
||
1039 | * https://tracker.ceph.com/issues/50223 |
||
1040 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1041 | * https://tracker.ceph.com/issues/52624 |
||
1042 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1043 | * https://tracker.ceph.com/issues/57205 |
||
1044 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1045 | * https://tracker.ceph.com/issues/57280 |
||
1046 | qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman |
||
1047 | * https://tracker.ceph.com/issues/51282 |
||
1048 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1049 | * https://tracker.ceph.com/issues/48203 |
||
1050 | https://tracker.ceph.com/issues/36593 |
||
1051 | qa: quota failure |
||
1052 | qa: quota failure caused by clients stepping on each other |
||
1053 | * https://tracker.ceph.com/issues/57580 |
||
1054 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
1055 | |||
1056 | 77 | Rishabh Dave | |
1057 | h3. 2022 Aug 26 |
||
1058 | 76 | Rishabh Dave | |
1059 | http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/ |
||
1060 | http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/ |
||
1061 | |||
1062 | * https://tracker.ceph.com/issues/57206 |
||
1063 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1064 | * https://tracker.ceph.com/issues/56632 |
||
1065 | Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones) |
||
1066 | * https://tracker.ceph.com/issues/56446 |
||
1067 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1068 | * https://tracker.ceph.com/issues/51964 |
||
1069 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1070 | * https://tracker.ceph.com/issues/53859 |
||
1071 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1072 | |||
1073 | * https://tracker.ceph.com/issues/54460 |
||
1074 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1075 | * https://tracker.ceph.com/issues/54462 |
||
1076 | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
||
1077 | * https://tracker.ceph.com/issues/54460 |
||
1078 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1079 | * https://tracker.ceph.com/issues/36593 |
||
1080 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
1081 | |||
1082 | * https://tracker.ceph.com/issues/52624 |
||
1083 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1084 | * https://tracker.ceph.com/issues/55804 |
||
1085 | Command failed (workunit test suites/pjd.sh) |
||
1086 | * https://tracker.ceph.com/issues/50223 |
||
1087 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1088 | |||
1089 | |||
1090 | 75 | Venky Shankar | h3. 2022 Aug 22 |
1091 | |||
1092 | https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/ |
||
1093 | https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run) |
||
1094 | |||
1095 | * https://tracker.ceph.com/issues/52624 |
||
1096 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1097 | * https://tracker.ceph.com/issues/56446 |
||
1098 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1099 | * https://tracker.ceph.com/issues/55804 |
||
1100 | Command failed (workunit test suites/pjd.sh) |
||
1101 | * https://tracker.ceph.com/issues/51278 |
||
1102 | mds: "FAILED ceph_assert(!segments.empty())" |
||
1103 | * https://tracker.ceph.com/issues/54460 |
||
1104 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1105 | * https://tracker.ceph.com/issues/57205 |
||
1106 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1107 | * https://tracker.ceph.com/issues/57206 |
||
1108 | ceph_test_libcephfs_reclaim crashes during test |
||
1109 | * https://tracker.ceph.com/issues/53859 |
||
1110 | Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1111 | * https://tracker.ceph.com/issues/50223 |
||
1112 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1113 | |||
1114 | 72 | Venky Shankar | h3. 2022 Aug 12 |
1115 | |||
1116 | https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/ |
||
1117 | https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run) |
||
1118 | |||
1119 | * https://tracker.ceph.com/issues/52624 |
||
1120 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1121 | * https://tracker.ceph.com/issues/56446 |
||
1122 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1123 | * https://tracker.ceph.com/issues/51964 |
||
1124 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1125 | * https://tracker.ceph.com/issues/55804 |
||
1126 | Command failed (workunit test suites/pjd.sh) |
||
1127 | * https://tracker.ceph.com/issues/50223 |
||
1128 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1129 | * https://tracker.ceph.com/issues/50821 |
||
1130 | qa: untar_snap_rm failure during mds thrashing |
||
1131 | * https://tracker.ceph.com/issues/54460 |
||
1132 | 73 | Venky Shankar | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
1133 | 72 | Venky Shankar | |
1134 | 71 | Venky Shankar | h3. 2022 Aug 04 |
1135 | |||
1136 | https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats) |
||
1137 | |||
1138 | Unrelated teuthology failure on rhel |
||
1139 | |||
1140 | 69 | Rishabh Dave | h3. 2022 Jul 25 |
1141 | 68 | Rishabh Dave | |
1142 | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/ |
||
1143 | |||
1144 | 1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi |
||
1145 | 2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/ |
||
1146 | 74 | Rishabh Dave | 3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/ |
1147 | 4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/ |
||
1148 | 68 | Rishabh Dave | |
1149 | * https://tracker.ceph.com/issues/55804 |
||
1150 | Command failed (workunit test suites/pjd.sh) |
||
1151 | * https://tracker.ceph.com/issues/50223 |
||
1152 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1153 | |||
1154 | * https://tracker.ceph.com/issues/54460 |
||
1155 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1156 | * https://tracker.ceph.com/issues/36593 |
||
1157 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
1158 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/54462 |
1159 | 74 | Rishabh Dave | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
1160 | 68 | Rishabh Dave | |
1161 | 67 | Patrick Donnelly | h3. 2022 July 22 |
1162 | |||
1163 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756 |
||
1164 | |||
1165 | MDS_HEALTH_DUMMY error in log fixed by followup commit. |
||
1166 | transient selinux ping failure |
||
1167 | |||
1168 | * https://tracker.ceph.com/issues/56694 |
||
1169 | qa: avoid blocking forever on hung umount |
||
1170 | * https://tracker.ceph.com/issues/56695 |
||
1171 | [RHEL stock] pjd test failures |
||
1172 | * https://tracker.ceph.com/issues/56696 |
||
1173 | admin keyring disappears during qa run |
||
1174 | * https://tracker.ceph.com/issues/56697 |
||
1175 | qa: fs/snaps fails for fuse |
||
1176 | * https://tracker.ceph.com/issues/50222 |
||
1177 | osd: 5.2s0 deep-scrub : stat mismatch |
||
1178 | * https://tracker.ceph.com/issues/56698 |
||
1179 | client: FAILED ceph_assert(_size == 0) |
||
1180 | * https://tracker.ceph.com/issues/50223 |
||
1181 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1182 | |||
1183 | |||
1184 | 66 | Rishabh Dave | h3. 2022 Jul 15 |
1185 | 65 | Rishabh Dave | |
1186 | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/ |
||
1187 | |||
1188 | re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/ |
||
1189 | |||
1190 | * https://tracker.ceph.com/issues/53859 |
||
1191 | Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1192 | * https://tracker.ceph.com/issues/55804 |
||
1193 | Command failed (workunit test suites/pjd.sh) |
||
1194 | * https://tracker.ceph.com/issues/50223 |
||
1195 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1196 | * https://tracker.ceph.com/issues/50222 |
||
1197 | osd: deep-scrub : stat mismatch |
||
1198 | |||
1199 | * https://tracker.ceph.com/issues/56632 |
||
1200 | Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones) |
||
1201 | * https://tracker.ceph.com/issues/56634 |
||
1202 | workunit test fs/snaps/snaptest-intodir.sh |
||
1203 | * https://tracker.ceph.com/issues/56644 |
||
1204 | Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) |
||
1205 | |||
1206 | |||
1207 | |||
1208 | 61 | Rishabh Dave | h3. 2022 July 05 |
1209 | |||
1210 | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/ |
||
1211 | 62 | Rishabh Dave | |
1212 | 64 | Rishabh Dave | On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/ |
1213 | |||
1214 | On 2nd re-run only a few jobs failed - |
||
1215 | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/ |
||
1217 | 62 | Rishabh Dave | |
1218 | * https://tracker.ceph.com/issues/56446 |
||
1219 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1220 | * https://tracker.ceph.com/issues/55804 |
||
1221 | Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/ |
||
1222 | |||
1223 | * https://tracker.ceph.com/issues/56445 |
||
1224 | Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --" |
||
1225 | * https://tracker.ceph.com/issues/51267 |
||
1226 | 63 | Rishabh Dave | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 |
1227 | * https://tracker.ceph.com/issues/50224 |
||
1228 | Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring) |
||
1229 | 62 | Rishabh Dave | |
1230 | |||
1231 | 61 | Rishabh Dave | |
1232 | 58 | Venky Shankar | h3. 2022 July 04 |
1233 | |||
1234 | https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/ |
||
1235 | (rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel) |
||
1236 | |||
1237 | * https://tracker.ceph.com/issues/56445 |
||
1238 | Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --" |
||
1239 | * https://tracker.ceph.com/issues/56446 |
||
1240 | 59 | Rishabh Dave | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
1241 | * https://tracker.ceph.com/issues/51964 |
||
1242 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1243 | * https://tracker.ceph.com/issues/52624 |
||
1244 | 60 | Rishabh Dave | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
1245 | 59 | Rishabh Dave | |
1246 | 57 | Venky Shankar | h3. 2022 June 20 |
1247 | |||
1248 | https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/ |
||
1249 | https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/ |
||
1250 | |||
1251 | * https://tracker.ceph.com/issues/52624 |
||
1252 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1253 | * https://tracker.ceph.com/issues/55804 |
||
1254 | qa failure: pjd link tests failed |
||
1255 | * https://tracker.ceph.com/issues/54108 |
||
1256 | qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" |
||
1257 | * https://tracker.ceph.com/issues/55332 |
||
1258 | Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) |
||
1259 | |||
1260 | 56 | Patrick Donnelly | h3. 2022 June 13 |
1261 | |||
1262 | https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/ |
||
1263 | |||
1264 | * https://tracker.ceph.com/issues/56024 |
||
1265 | cephadm: removes ceph.conf during qa run causing command failure |
||
1266 | * https://tracker.ceph.com/issues/48773 |
||
1267 | qa: scrub does not complete |
||
1268 | * https://tracker.ceph.com/issues/56012 |
||
1269 | mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay()) |
||
1270 | |||
1271 | |||
1272 | 55 | Venky Shankar | h3. 2022 Jun 13 |
1273 | 54 | Venky Shankar | |
1274 | https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/ |
||
1275 | https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/ |
||
1276 | |||
1277 | * https://tracker.ceph.com/issues/52624 |
||
1278 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1279 | * https://tracker.ceph.com/issues/51964 |
||
1280 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1281 | * https://tracker.ceph.com/issues/53859 |
||
1282 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1283 | * https://tracker.ceph.com/issues/55804 |
||
1284 | qa failure: pjd link tests failed |
||
1285 | * https://tracker.ceph.com/issues/56003 |
||
1286 | client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0) |
||
1287 | * https://tracker.ceph.com/issues/56011 |
||
1288 | fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison |
||
1289 | * https://tracker.ceph.com/issues/56012 |
||
1290 | mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay()) |
||
1291 | |||
1292 | 53 | Venky Shankar | h3. 2022 Jun 07 |
1293 | |||
1294 | https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/ |
||
1295 | https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR) |
||
1296 | |||
1297 | * https://tracker.ceph.com/issues/52624 |
||
1298 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1299 | * https://tracker.ceph.com/issues/50223 |
||
1300 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1301 | * https://tracker.ceph.com/issues/50224 |
||
1302 | qa: test_mirroring_init_failure_with_recovery failure |
||
1303 | |||
1304 | 51 | Venky Shankar | h3. 2022 May 12 |
1305 | |||
1306 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847 |
||
1307 | 52 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun) |
1308 | 51 | Venky Shankar | |
1309 | * https://tracker.ceph.com/issues/52624 |
||
1310 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1311 | * https://tracker.ceph.com/issues/50223 |
||
1312 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1313 | * https://tracker.ceph.com/issues/55332 |
||
1314 | Failure in snaptest-git-ceph.sh |
||
1315 | * https://tracker.ceph.com/issues/53859 |
||
1316 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1317 | * https://tracker.ceph.com/issues/55538 |
||
1318 | 1 | Patrick Donnelly | Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) |
1319 | 52 | Venky Shankar | * https://tracker.ceph.com/issues/55258 |
1320 | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequent) |
||
1321 | 51 | Venky Shankar | |
1322 | 49 | Venky Shankar | h3. 2022 May 04 |
1323 | |||
1324 | 50 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/ |
1325 | https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs) |
||
1326 | |||
1327 | 49 | Venky Shankar | * https://tracker.ceph.com/issues/52624 |
1328 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1329 | * https://tracker.ceph.com/issues/50223 |
||
1330 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1331 | * https://tracker.ceph.com/issues/55332 |
||
1332 | Failure in snaptest-git-ceph.sh |
||
1333 | * https://tracker.ceph.com/issues/53859 |
||
1334 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1335 | * https://tracker.ceph.com/issues/55516 |
||
1336 | qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)" |
||
1337 | * https://tracker.ceph.com/issues/55537 |
||
1338 | mds: crash during fs:upgrade test |
||
1339 | * https://tracker.ceph.com/issues/55538 |
||
1340 | Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) |
||
1341 | |||
1342 | 48 | Venky Shankar | h3. 2022 Apr 25 |
1343 | |||
1344 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar) |
||
1345 | |||
1346 | * https://tracker.ceph.com/issues/52624 |
||
1347 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1348 | * https://tracker.ceph.com/issues/50223 |
||
1349 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1350 | * https://tracker.ceph.com/issues/55258 |
||
1351 | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs |
||
1352 | * https://tracker.ceph.com/issues/55377 |
||
1353 | kclient: mds revoke Fwb caps stuck after the kclient tries writeback once |
||
1354 | |||
1355 | 47 | Venky Shankar | h3. 2022 Apr 14 |
1356 | |||
1357 | https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044 |
||
1358 | |||
1359 | * https://tracker.ceph.com/issues/52624 |
||
1360 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1361 | * https://tracker.ceph.com/issues/50223 |
||
1362 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1363 | * https://tracker.ceph.com/issues/52438 |
||
1364 | qa: ffsb timeout |
||
1365 | * https://tracker.ceph.com/issues/55170 |
||
1366 | mds: crash during rejoin (CDir::fetch_keys) |
||
1367 | * https://tracker.ceph.com/issues/55331 |
||
1368 | pjd failure |
||
1369 | * https://tracker.ceph.com/issues/48773 |
||
1370 | qa: scrub does not complete |
||
1371 | * https://tracker.ceph.com/issues/55332 |
||
1372 | Failure in snaptest-git-ceph.sh |
||
1373 | * https://tracker.ceph.com/issues/55258 |
||
1374 | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs |
||
1375 | |||
1376 | 45 | Venky Shankar | h3. 2022 Apr 11 |
1377 | |||
1378 | 46 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242 |
1379 | 45 | Venky Shankar | |
1380 | * https://tracker.ceph.com/issues/48773 |
||
1381 | qa: scrub does not complete |
||
1382 | * https://tracker.ceph.com/issues/52624 |
||
1383 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1384 | * https://tracker.ceph.com/issues/52438 |
||
1385 | qa: ffsb timeout |
||
1386 | * https://tracker.ceph.com/issues/48680 |
||
1387 | mds: scrubbing stuck "scrub active (0 inodes in the stack)" |
||
1388 | * https://tracker.ceph.com/issues/55236 |
||
1389 | qa: fs/snaps tests fails with "hit max job timeout" |
||
1390 | * https://tracker.ceph.com/issues/54108 |
||
1391 | qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" |
||
1392 | * https://tracker.ceph.com/issues/54971 |
||
1393 | Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics) |
||
1394 | * https://tracker.ceph.com/issues/50223 |
||
1395 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1396 | * https://tracker.ceph.com/issues/55258 |
||
1397 | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs |
||
1398 | |||
1399 | 44 | Venky Shankar | h3. 2022 Mar 21 |
1400 | 42 | Venky Shankar | |
1401 | 43 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/ |
1402 | |||
1403 | Run didn't go well, lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests. |
||
1404 | |||
1405 | |||
1406 | h3. 2022 Mar 08 |
||
1407 | |||
1408 | 42 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/ |
1409 | |||
1410 | rerun with |
||
1411 | - (drop) https://github.com/ceph/ceph/pull/44679 |
||
1412 | - (drop) https://github.com/ceph/ceph/pull/44958 |
||
1413 | https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/ |
||
1414 | |||
1415 | * https://tracker.ceph.com/issues/54419 (new) |
||
1416 | `ceph orch upgrade start` seems to never reach completion |
||
1417 | * https://tracker.ceph.com/issues/51964 |
||
1418 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1419 | * https://tracker.ceph.com/issues/52624 |
||
1420 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1421 | * https://tracker.ceph.com/issues/50223 |
||
1422 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1423 | * https://tracker.ceph.com/issues/52438 |
||
1424 | qa: ffsb timeout |
||
1425 | * https://tracker.ceph.com/issues/50821 |
||
1426 | qa: untar_snap_rm failure during mds thrashing |
||
1427 | |||
1428 | |||
1429 | 41 | Venky Shankar | h3. 2022 Feb 09 |
1430 | |||
1431 | https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/ |
||
1432 | |||
1433 | rerun with |
||
1434 | - (drop) https://github.com/ceph/ceph/pull/37938 |
||
1435 | - (drop) https://github.com/ceph/ceph/pull/44335 |
||
1436 | - (drop) https://github.com/ceph/ceph/pull/44491 |
||
1437 | - (drop) https://github.com/ceph/ceph/pull/44501 |
||
1438 | https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/ |
||
1439 | |||
1440 | * https://tracker.ceph.com/issues/51964 |
||
1441 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1442 | * https://tracker.ceph.com/issues/54066 |
||
1443 | test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0` |
||
1444 | * https://tracker.ceph.com/issues/48773 |
||
1445 | qa: scrub does not complete |
||
1446 | * https://tracker.ceph.com/issues/52624 |
||
1447 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1448 | * https://tracker.ceph.com/issues/50223 |
||
1449 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1450 | * https://tracker.ceph.com/issues/52438 |
||
1451 | qa: ffsb timeout |
||
1452 | |||
1453 | 40 | Patrick Donnelly | h3. 2022 Feb 01 |
1454 | |||
1455 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526 |
||
1456 | |||
1457 | * https://tracker.ceph.com/issues/54107 |
||
1458 | kclient: hang during umount |
||
1459 | * https://tracker.ceph.com/issues/54106 |
||
1460 | kclient: hang during workunit cleanup |
||
1461 | * https://tracker.ceph.com/issues/54108 |
||
1462 | qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" |
||
1463 | * https://tracker.ceph.com/issues/48773 |
||
1464 | qa: scrub does not complete |
||
1465 | * https://tracker.ceph.com/issues/52438 |
||
1466 | qa: ffsb timeout |
||
1467 | |||
1468 | |||
1469 | 36 | Venky Shankar | h3. 2022 Jan 13 |
1470 | |||
1471 | https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/ |
||
1472 | 39 | Venky Shankar | |
1473 | 36 | Venky Shankar | rerun with: |
1474 | 38 | Venky Shankar | - (add) https://github.com/ceph/ceph/pull/44570 |
1475 | - (drop) https://github.com/ceph/ceph/pull/43184 |
||
1476 | 36 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/ |
1477 | |||
1478 | * https://tracker.ceph.com/issues/50223 |
||
1479 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1480 | * https://tracker.ceph.com/issues/51282 |
||
1481 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1482 | * https://tracker.ceph.com/issues/48773 |
||
1483 | qa: scrub does not complete |
||
1484 | * https://tracker.ceph.com/issues/52624 |
||
1485 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1486 | * https://tracker.ceph.com/issues/53859 |
||
1487 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1488 | |||
1489 | 34 | Venky Shankar | h3. 2022 Jan 03 |
1490 | |||
1491 | https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/ |
||
1492 | https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun) |
||
1493 | |||
1494 | * https://tracker.ceph.com/issues/50223 |
||
1495 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1496 | * https://tracker.ceph.com/issues/51964 |
||
1497 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1498 | * https://tracker.ceph.com/issues/51267 |
||
1499 | CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... |
||
1500 | * https://tracker.ceph.com/issues/51282 |
||
1501 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1502 | * https://tracker.ceph.com/issues/50821 |
||
1503 | qa: untar_snap_rm failure during mds thrashing |
||
1504 | * https://tracker.ceph.com/issues/51278 |
||
1505 | mds: "FAILED ceph_assert(!segments.empty())" |
||
1506 | 35 | Ramana Raja | * https://tracker.ceph.com/issues/52279 |
1507 | cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter |
||
1508 | |||
1509 | 34 | Venky Shankar | |
1510 | 33 | Patrick Donnelly | h3. 2021 Dec 22 |
1511 | |||
1512 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316 |
||
1513 | |||
1514 | * https://tracker.ceph.com/issues/52624 |
||
1515 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1516 | * https://tracker.ceph.com/issues/50223 |
||
1517 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1518 | * https://tracker.ceph.com/issues/52279 |
||
1519 | cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter |
||
1520 | * https://tracker.ceph.com/issues/50224 |
||
1521 | qa: test_mirroring_init_failure_with_recovery failure |
||
1522 | * https://tracker.ceph.com/issues/48773 |
||
1523 | qa: scrub does not complete |
||
1524 | |||
1525 | |||
1526 | 32 | Venky Shankar | h3. 2021 Nov 30 |
1527 | |||
1528 | https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/ |
||
1529 | https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes) |
||
1530 | |||
1531 | * https://tracker.ceph.com/issues/53436 |
||
1532 | mds, mon: mds beacon messages get dropped? (mds never reaches up:active state) |
||
1533 | * https://tracker.ceph.com/issues/51964 |
||
1534 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1535 | * https://tracker.ceph.com/issues/48812 |
||
1536 | qa: test_scrub_pause_and_resume_with_abort failure |
||
1537 | * https://tracker.ceph.com/issues/51076 |
||
1538 | "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. |
||
1539 | * https://tracker.ceph.com/issues/50223 |
||
1540 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1541 | * https://tracker.ceph.com/issues/52624 |
||
1542 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1543 | * https://tracker.ceph.com/issues/50250 |
||
1544 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1545 | |||
1546 | |||
1547 | 31 | Patrick Donnelly | h3. 2021 November 9 |
1548 | |||
1549 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315 |
||
1550 | |||
1551 | * https://tracker.ceph.com/issues/53214 |
||
1552 | qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory" |
||
1553 | * https://tracker.ceph.com/issues/48773 |
||
1554 | qa: scrub does not complete |
||
1555 | * https://tracker.ceph.com/issues/50223 |
||
1556 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1557 | * https://tracker.ceph.com/issues/51282 |
||
1558 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1559 | * https://tracker.ceph.com/issues/52624 |
||
1560 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1561 | * https://tracker.ceph.com/issues/53216 |
||
1562 | qa: "RuntimeError: value of attributes should be either str or None. client_id" |
||
1563 | * https://tracker.ceph.com/issues/50250 |
||
1564 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1565 | |||
1566 | |||
1567 | |||
1568 | 30 | Patrick Donnelly | h3. 2021 November 03 |
1569 | |||
1570 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355 |
||
1571 | |||
1572 | * https://tracker.ceph.com/issues/51964 |
||
1573 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1574 | * https://tracker.ceph.com/issues/51282 |
||
1575 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1576 | * https://tracker.ceph.com/issues/52436 |
||
1577 | fs/ceph: "corrupt mdsmap" |
||
1578 | * https://tracker.ceph.com/issues/53074 |
||
1579 | pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active |
||
1580 | * https://tracker.ceph.com/issues/53150 |
||
1581 | pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5 |
||
1582 | * https://tracker.ceph.com/issues/53155 |
||
1583 | MDSMonitor: assertion during upgrade to v16.2.5+ |
||
1584 | |||
1585 | |||
1586 | 29 | Patrick Donnelly | h3. 2021 October 26 |
1587 | |||
1588 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447 |
||
1589 | |||
1590 | * https://tracker.ceph.com/issues/53074 |
||
1591 | pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active |
||
1592 | * https://tracker.ceph.com/issues/52997 |
||
1593 | testing: hanging umount |
||
1594 | * https://tracker.ceph.com/issues/50824 |
||
1595 | qa: snaptest-git-ceph bus error |
||
1596 | * https://tracker.ceph.com/issues/52436 |
||
1597 | fs/ceph: "corrupt mdsmap" |
||
1598 | * https://tracker.ceph.com/issues/48773 |
||
1599 | qa: scrub does not complete |
||
1600 | * https://tracker.ceph.com/issues/53082 |
||
1601 | ceph-fuse: segmentation fault in Client::handle_mds_map |
||
1602 | * https://tracker.ceph.com/issues/50223 |
||
1603 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1604 | * https://tracker.ceph.com/issues/52624 |
||
1605 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1606 | * https://tracker.ceph.com/issues/50224 |
||
1607 | qa: test_mirroring_init_failure_with_recovery failure |
||
1608 | * https://tracker.ceph.com/issues/50821 |
||
1609 | qa: untar_snap_rm failure during mds thrashing |
||
1610 | * https://tracker.ceph.com/issues/50250 |
||
1611 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1612 | |||
1613 | |||
1614 | |||
1615 | 27 | Patrick Donnelly | h3. 2021 October 19 |
1616 | |||
1617 | 28 | Patrick Donnelly | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028 |
1618 | 27 | Patrick Donnelly | |
1619 | * https://tracker.ceph.com/issues/52995 |
||
1620 | qa: test_standby_count_wanted failure |
||
1621 | * https://tracker.ceph.com/issues/52948 |
||
1622 | osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up" |
||
1623 | * https://tracker.ceph.com/issues/52996 |
||
1624 | qa: test_perf_counters via test_openfiletable |
||
1625 | * https://tracker.ceph.com/issues/48772 |
||
1626 | qa: pjd: not ok 9, 44, 80 |
||
1627 | * https://tracker.ceph.com/issues/52997 |
||
1628 | testing: hanging umount |
||
1629 | * https://tracker.ceph.com/issues/50250 |
||
1630 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1631 | * https://tracker.ceph.com/issues/52624 |
||
1632 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1633 | * https://tracker.ceph.com/issues/50223 |
||
1634 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1635 | * https://tracker.ceph.com/issues/50821 |
||
1636 | qa: untar_snap_rm failure during mds thrashing |
||
1637 | * https://tracker.ceph.com/issues/48773 |
||
1638 | qa: scrub does not complete |
||
1639 | |||
1640 | |||
1641 | 26 | Patrick Donnelly | h3. 2021 October 12 |
1642 | |||
1643 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211 |
||
1644 | |||
1645 | Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944 |
||
1646 | |||
1647 | New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167 |
||
1648 | |||
1649 | |||
1650 | * https://tracker.ceph.com/issues/51282 |
||
1651 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1652 | * https://tracker.ceph.com/issues/52948 |
||
1653 | osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up" |
||
1654 | * https://tracker.ceph.com/issues/48773 |
||
1655 | qa: scrub does not complete |
||
1656 | * https://tracker.ceph.com/issues/50224 |
||
1657 | qa: test_mirroring_init_failure_with_recovery failure |
||
1658 | * https://tracker.ceph.com/issues/52949 |
||
1659 | RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'} |
||
1660 | |||
1661 | |||
1662 | 25 | Patrick Donnelly | h3. 2021 October 02 |
1663 | 23 | Patrick Donnelly | |
1664 | 24 | Patrick Donnelly | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337 |
1665 | |||
1666 | Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit. |
||
1667 | |||
1668 | test_simple failures caused by PR in this set. |
||
1669 | |||
1670 | A few reruns because of QA infra noise. |
||
1671 | |||
1672 | * https://tracker.ceph.com/issues/52822 |
||
1673 | qa: failed pacific install on fs:upgrade |
||
1674 | * https://tracker.ceph.com/issues/52624 |
||
1675 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1676 | * https://tracker.ceph.com/issues/50223 |
||
1677 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1678 | * https://tracker.ceph.com/issues/48773 |
||
1679 | qa: scrub does not complete |
||
1680 | |||
1681 | |||
1682 | h3. 2021 September 20 |
||
1683 | |||
1684 | 23 | Patrick Donnelly | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826 |
1685 | |||
1686 | * https://tracker.ceph.com/issues/52677 |
||
1687 | qa: test_simple failure |
||
1688 | * https://tracker.ceph.com/issues/51279 |
||
1689 | kclient hangs on umount (testing branch) |
||
1690 | * https://tracker.ceph.com/issues/50223 |
||
1691 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1692 | * https://tracker.ceph.com/issues/50250 |
||
1693 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1694 | * https://tracker.ceph.com/issues/52624 |
||
1695 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1696 | * https://tracker.ceph.com/issues/52438 |
||
1697 | qa: ffsb timeout |
||
1698 | |||
1699 | |||
1700 | 22 | Patrick Donnelly | h3. 2021 September 10 |
1701 | |||
1702 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451 |
||
1703 | |||
1704 | * https://tracker.ceph.com/issues/50223 |
||
1705 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1706 | * https://tracker.ceph.com/issues/50250 |
||
1707 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1708 | * https://tracker.ceph.com/issues/52624 |
||
1709 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1710 | * https://tracker.ceph.com/issues/52625 |
||
1711 | qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots) |
||
1712 | * https://tracker.ceph.com/issues/52439 |
||
1713 | qa: acls does not compile on centos stream |
||
1714 | * https://tracker.ceph.com/issues/50821 |
||
1715 | qa: untar_snap_rm failure during mds thrashing |
||
1716 | * https://tracker.ceph.com/issues/48773 |
||
1717 | qa: scrub does not complete |
||
1718 | * https://tracker.ceph.com/issues/52626 |
||
1719 | mds: ScrubStack.cc: 831: FAILED ceph_assert(diri) |
||
1720 | * https://tracker.ceph.com/issues/51279 |
||
1721 | kclient hangs on umount (testing branch) |
||
1722 | |||
1723 | |||
1724 | 21 | Patrick Donnelly | h3. 2021 August 27 |
1725 | |||
1726 | Several jobs died because of device failures. |
||
1727 | |||
1728 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746 |
||
1729 | |||
1730 | * https://tracker.ceph.com/issues/52430 |
||
1731 | mds: fast async create client mount breaks racy test |
||
1732 | * https://tracker.ceph.com/issues/52436 |
||
1733 | fs/ceph: "corrupt mdsmap" |
||
1734 | * https://tracker.ceph.com/issues/52437 |
||
1735 | mds: InoTable::replay_release_ids abort via test_inotable_sync |
||
1736 | * https://tracker.ceph.com/issues/51282 |
||
1737 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1738 | * https://tracker.ceph.com/issues/52438 |
||
1739 | qa: ffsb timeout |
||
1740 | * https://tracker.ceph.com/issues/52439 |
||
1741 | qa: acls does not compile on centos stream |
||
1742 | |||
1743 | |||
1744 | 20 | Patrick Donnelly | h3. 2021 July 30 |
1745 | |||
1746 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022 |
||
1747 | |||
1748 | * https://tracker.ceph.com/issues/50250 |
||
1749 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1750 | * https://tracker.ceph.com/issues/51282 |
||
1751 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1752 | * https://tracker.ceph.com/issues/48773 |
||
1753 | qa: scrub does not complete |
||
1754 | * https://tracker.ceph.com/issues/51975 |
||
1755 | pybind/mgr/stats: KeyError |
||
1756 | |||
1757 | |||
1758 | 19 | Patrick Donnelly | h3. 2021 July 28 |
1759 | |||
1760 | https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/ |
||
1761 | |||
1762 | with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/ |
||
1763 | |||
1764 | * https://tracker.ceph.com/issues/51905 |
||
1765 | qa: "error reading sessionmap 'mds1_sessionmap'" |
||
1766 | * https://tracker.ceph.com/issues/48773 |
||
1767 | qa: scrub does not complete |
||
1768 | * https://tracker.ceph.com/issues/50250 |
||
1769 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1770 | * https://tracker.ceph.com/issues/51267 |
||
1771 | CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... |
||
1772 | * https://tracker.ceph.com/issues/51279 |
||
1773 | kclient hangs on umount (testing branch) |
||
1774 | |||
1775 | |||
1776 | 18 | Patrick Donnelly | h3. 2021 July 16 |
1777 | |||
1778 | https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/ |
||
1779 | |||
1780 | * https://tracker.ceph.com/issues/48773 |
||
1781 | qa: scrub does not complete |
||
1782 | * https://tracker.ceph.com/issues/48772 |
||
1783 | qa: pjd: not ok 9, 44, 80 |
||
1784 | * https://tracker.ceph.com/issues/45434 |
||
1785 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1786 | * https://tracker.ceph.com/issues/51279 |
||
1787 | kclient hangs on umount (testing branch) |
||
1788 | * https://tracker.ceph.com/issues/50824 |
||
1789 | qa: snaptest-git-ceph bus error |
||
1790 | |||
1791 | |||
1792 | 17 | Patrick Donnelly | h3. 2021 July 04 |
1793 | |||
1794 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904 |
||
1795 | |||
1796 | * https://tracker.ceph.com/issues/48773 |
||
1797 | qa: scrub does not complete |
||
1798 | * https://tracker.ceph.com/issues/39150 |
||
1799 | mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum |
||
1800 | * https://tracker.ceph.com/issues/45434 |
||
1801 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1802 | * https://tracker.ceph.com/issues/51282 |
||
1803 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1804 | * https://tracker.ceph.com/issues/48771 |
||
1805 | qa: iogen: workload fails to cause balancing |
||
1806 | * https://tracker.ceph.com/issues/51279 |
||
1807 | kclient hangs on umount (testing branch) |
||
1808 | * https://tracker.ceph.com/issues/50250 |
||
1809 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
1810 | |||
1811 | |||
1812 | 16 | Patrick Donnelly | h3. 2021 July 01 |
1813 | |||
1814 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056 |
||
1815 | |||
1816 | * https://tracker.ceph.com/issues/51197 |
||
1817 | qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details |
||
1818 | * https://tracker.ceph.com/issues/50866 |
||
1819 | osd: stat mismatch on objects |
||
1820 | * https://tracker.ceph.com/issues/48773 |
||
1821 | qa: scrub does not complete |
||
1822 | |||
1823 | |||
1824 | 15 | Patrick Donnelly | h3. 2021 June 26 |
1825 | |||
1826 | https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/ |
||
1827 | |||
1828 | * https://tracker.ceph.com/issues/51183 |
||
1829 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
1830 | * https://tracker.ceph.com/issues/51410 |
||
1831 | kclient: fails to finish reconnect during MDS thrashing (testing branch) |
||
1832 | * https://tracker.ceph.com/issues/48773 |
||
1833 | qa: scrub does not complete |
||
1834 | * https://tracker.ceph.com/issues/51282 |
||
1835 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1836 | * https://tracker.ceph.com/issues/51169 |
||
1837 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
1838 | * https://tracker.ceph.com/issues/48772 |
||
1839 | qa: pjd: not ok 9, 44, 80 |
||
1840 | |||
1841 | |||
1842 | 14 | Patrick Donnelly | h3. 2021 June 21 |
1843 | |||
1844 | https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/ |
||
1845 | |||
1846 | One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599 |
||
1847 | |||
1848 | * https://tracker.ceph.com/issues/51282 |
||
1849 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1850 | * https://tracker.ceph.com/issues/51183 |
||
1851 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
1852 | * https://tracker.ceph.com/issues/48773 |
||
1853 | qa: scrub does not complete |
||
1854 | * https://tracker.ceph.com/issues/48771 |
||
1855 | qa: iogen: workload fails to cause balancing |
||
1856 | * https://tracker.ceph.com/issues/51169 |
||
1857 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
1858 | * https://tracker.ceph.com/issues/50495 |
||
1859 | libcephfs: shutdown race fails with status 141 |
||
1860 | * https://tracker.ceph.com/issues/45434 |
||
1861 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1862 | * https://tracker.ceph.com/issues/50824 |
||
1863 | qa: snaptest-git-ceph bus error |
||
1864 | * https://tracker.ceph.com/issues/50223 |
||
1865 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1866 | |||
1867 | |||
1868 | 13 | Patrick Donnelly | h3. 2021 June 16 |
1869 | |||
1870 | https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/ |
||
1871 | |||
1872 | MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667 |
||
1873 | |||
1874 | * https://tracker.ceph.com/issues/45434 |
||
1875 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1876 | * https://tracker.ceph.com/issues/51169 |
||
1877 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
1878 | * https://tracker.ceph.com/issues/43216 |
||
1879 | MDSMonitor: removes MDS coming out of quorum election |
||
1880 | * https://tracker.ceph.com/issues/51278 |
||
1881 | mds: "FAILED ceph_assert(!segments.empty())" |
||
1882 | * https://tracker.ceph.com/issues/51279 |
||
1883 | kclient hangs on umount (testing branch) |
||
1884 | * https://tracker.ceph.com/issues/51280 |
||
1885 | mds: "FAILED ceph_assert(r == 0 || r == -2)" |
||
1886 | * https://tracker.ceph.com/issues/51183 |
||
1887 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
1888 | * https://tracker.ceph.com/issues/51281 |
||
1889 | qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'" |
||
1890 | * https://tracker.ceph.com/issues/48773 |
||
1891 | qa: scrub does not complete |
||
1892 | * https://tracker.ceph.com/issues/51076 |
||
1893 | "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. |
||
1894 | * https://tracker.ceph.com/issues/51228 |
||
1895 | qa: rmdir: failed to remove 'a/.snap/*': No such file or directory |
||
1896 | * https://tracker.ceph.com/issues/51282 |
||
1897 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1898 | |||
1899 | |||
1900 | 12 | Patrick Donnelly | h3. 2021 June 14 |
1901 | |||
1902 | https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/ |
||
1903 | |||
1904 | Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific. |
||
1905 | |||
1906 | * https://tracker.ceph.com/issues/51169 |
||
1907 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
1908 | * https://tracker.ceph.com/issues/51228 |
||
1909 | qa: rmdir: failed to remove 'a/.snap/*': No such file or directory |
||
1910 | * https://tracker.ceph.com/issues/48773 |
||
1911 | qa: scrub does not complete |
||
1912 | * https://tracker.ceph.com/issues/51183 |
||
1913 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
1914 | * https://tracker.ceph.com/issues/45434 |
||
1915 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1916 | * https://tracker.ceph.com/issues/51182 |
||
1917 | pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs' |
||
1918 | * https://tracker.ceph.com/issues/51229 |
||
1919 | qa: test_multi_snap_schedule list difference failure |
||
1920 | * https://tracker.ceph.com/issues/50821 |
||
1921 | qa: untar_snap_rm failure during mds thrashing |
||
1922 | |||
1923 | |||
1924 | 11 | Patrick Donnelly | h3. 2021 June 13 |
1925 | |||
1926 | https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/ |
||
1927 | |||
1928 | Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific. |
||
1929 | |||
1930 | * https://tracker.ceph.com/issues/51169 |
||
1931 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
1932 | * https://tracker.ceph.com/issues/48773 |
||
1933 | qa: scrub does not complete |
||
1934 | * https://tracker.ceph.com/issues/51182 |
||
1935 | pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs' |
||
1936 | * https://tracker.ceph.com/issues/51183 |
||
1937 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
1938 | * https://tracker.ceph.com/issues/51197 |
||
1939 | qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details |
||
1940 | * https://tracker.ceph.com/issues/45434 |
||
1941 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1942 | |||
1943 | 10 | Patrick Donnelly | h3. 2021 June 11 |
1944 | |||
1945 | https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/ |
||
1946 | |||
1947 | Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific. |
||
1948 | |||
1949 | * https://tracker.ceph.com/issues/51169 |
||
1950 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
1951 | * https://tracker.ceph.com/issues/45434 |
||
1952 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1953 | * https://tracker.ceph.com/issues/48771 |
||
1954 | qa: iogen: workload fails to cause balancing |
||
1955 | * https://tracker.ceph.com/issues/43216 |
||
1956 | MDSMonitor: removes MDS coming out of quorum election |
||
1957 | * https://tracker.ceph.com/issues/51182 |
||
1958 | pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs' |
||
1959 | * https://tracker.ceph.com/issues/50223 |
||
1960 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1961 | * https://tracker.ceph.com/issues/48773 |
||
1962 | qa: scrub does not complete |
||
1963 | * https://tracker.ceph.com/issues/51183 |
||
1964 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
1965 | * https://tracker.ceph.com/issues/51184 |
||
1966 | qa: fs:bugs does not specify distro |
||
1967 | |||
1968 | |||
1969 | 9 | Patrick Donnelly | h3. 2021 June 03 |
1970 | |||
1971 | https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/ |
||
1972 | |||
1973 | * https://tracker.ceph.com/issues/45434 |
||
1974 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
1975 | * https://tracker.ceph.com/issues/50016 |
||
1976 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
1977 | * https://tracker.ceph.com/issues/50821 |
||
1978 | qa: untar_snap_rm failure during mds thrashing |
||
1979 | * https://tracker.ceph.com/issues/50622 (regression) |
||
1980 | msg: active_connections regression |
||
1981 | * https://tracker.ceph.com/issues/49845#note-2 (regression) |
||
1982 | qa: failed umount in test_volumes |
||
1983 | * https://tracker.ceph.com/issues/48773 |
||
1984 | qa: scrub does not complete |
||
1985 | * https://tracker.ceph.com/issues/43216 |
||
1986 | MDSMonitor: removes MDS coming out of quorum election |
||
1987 | |||
1988 | |||
1989 | 7 | Patrick Donnelly | h3. 2021 May 18 |
1990 | |||
1991 | 8 | Patrick Donnelly | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114 |
1992 | |||
1993 | Regression in testing kernel caused some failures. Ilya fixed those and the rerun |
||
1994 | looked better. Some odd new noise in the rerun relating to packaging and "No |
||
1995 | module named 'tasks.ceph'". |
||
1996 | |||
1997 | * https://tracker.ceph.com/issues/50824 |
||
1998 | qa: snaptest-git-ceph bus error |
||
1999 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2000 | msg: active_connections regression |
||
2001 | * https://tracker.ceph.com/issues/49845#note-2 (regression) |
||
2002 | qa: failed umount in test_volumes |
||
2003 | * https://tracker.ceph.com/issues/48203 (stock kernel update required) |
||
2004 | qa: quota failure |
||
2005 | |||
2006 | |||
2007 | h3. 2021 May 18 |
||
2008 | |||
2009 | 7 | Patrick Donnelly | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642 |
2010 | |||
2011 | * https://tracker.ceph.com/issues/50821 |
||
2012 | qa: untar_snap_rm failure during mds thrashing |
||
2013 | * https://tracker.ceph.com/issues/48773 |
||
2014 | qa: scrub does not complete |
||
2015 | * https://tracker.ceph.com/issues/45591 |
||
2016 | mgr: FAILED ceph_assert(daemon != nullptr) |
||
2017 | * https://tracker.ceph.com/issues/50866 |
||
2018 | osd: stat mismatch on objects |
||
2019 | * https://tracker.ceph.com/issues/50016 |
||
2020 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2021 | * https://tracker.ceph.com/issues/50867 |
||
2022 | qa: fs:mirror: reduced data availability |
||
2023 | * https://tracker.ceph.com/issues/50821 |
||
2024 | qa: untar_snap_rm failure during mds thrashing |
||
2025 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2026 | msg: active_connections regression |
||
2027 | * https://tracker.ceph.com/issues/50223 |
||
2028 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2029 | * https://tracker.ceph.com/issues/50868 |
||
2030 | qa: "kern.log.gz already exists; not overwritten" |
||
2031 | * https://tracker.ceph.com/issues/50870 |
||
2032 | qa: test_full: "rm: cannot remove 'large_file_a': Permission denied" |
||
2033 | |||
2034 | |||
2035 | 6 | Patrick Donnelly | h3. 2021 May 11 |
2036 | |||
2037 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042 |
||
2038 | |||
2039 | * one class of failures caused by PR |
||
2040 | * https://tracker.ceph.com/issues/48812 |
||
2041 | qa: test_scrub_pause_and_resume_with_abort failure |
||
2042 | * https://tracker.ceph.com/issues/50390 |
||
2043 | mds: monclient: wait_auth_rotating timed out after 30 |
||
2044 | * https://tracker.ceph.com/issues/48773 |
||
2045 | qa: scrub does not complete |
||
2046 | * https://tracker.ceph.com/issues/50821 |
||
2047 | qa: untar_snap_rm failure during mds thrashing |
||
2048 | * https://tracker.ceph.com/issues/50224 |
||
2049 | qa: test_mirroring_init_failure_with_recovery failure |
||
2050 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2051 | msg: active_connections regression |
||
2052 | * https://tracker.ceph.com/issues/50825 |
||
2053 | qa: snaptest-git-ceph hang during mon thrashing v2 |
||
2054 | * https://tracker.ceph.com/issues/50821 |
||
2055 | qa: untar_snap_rm failure during mds thrashing |
||
2056 | * https://tracker.ceph.com/issues/50823 |
||
2057 | qa: RuntimeError: timeout waiting for cluster to stabilize |
||
2058 | |||
2059 | |||
2060 | 5 | Patrick Donnelly | h3. 2021 May 14 |
2061 | |||
2062 | https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/ |
||
2063 | |||
2064 | * https://tracker.ceph.com/issues/48812 |
||
2065 | qa: test_scrub_pause_and_resume_with_abort failure |
||
2066 | * https://tracker.ceph.com/issues/50821 |
||
2067 | qa: untar_snap_rm failure during mds thrashing |
||
2068 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2069 | msg: active_connections regression |
||
2070 | * https://tracker.ceph.com/issues/50822 |
||
2071 | qa: testing kernel patch for client metrics causes mds abort |
||
2072 | * https://tracker.ceph.com/issues/48773 |
||
2073 | qa: scrub does not complete |
||
2074 | * https://tracker.ceph.com/issues/50823 |
||
2075 | qa: RuntimeError: timeout waiting for cluster to stabilize |
||
2076 | * https://tracker.ceph.com/issues/50824 |
||
2077 | qa: snaptest-git-ceph bus error |
||
2078 | * https://tracker.ceph.com/issues/50825 |
||
2079 | qa: snaptest-git-ceph hang during mon thrashing v2 |
||
2080 | * https://tracker.ceph.com/issues/50826 |
||
2081 | kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers |
||
2082 | |||
2083 | |||
2084 | 4 | Patrick Donnelly | h3. 2021 May 01 |
2085 | |||
2086 | https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/ |
||
2087 | |||
2088 | * https://tracker.ceph.com/issues/45434 |
||
2089 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2090 | * https://tracker.ceph.com/issues/50281 |
||
2091 | qa: untar_snap_rm timeout |
||
2092 | * https://tracker.ceph.com/issues/48203 (stock kernel update required) |
||
2093 | qa: quota failure |
||
2094 | * https://tracker.ceph.com/issues/48773 |
||
2095 | qa: scrub does not complete |
||
2096 | * https://tracker.ceph.com/issues/50390 |
||
2097 | mds: monclient: wait_auth_rotating timed out after 30 |
||
2098 | * https://tracker.ceph.com/issues/50250 |
||
2099 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" |
||
2100 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2101 | msg: active_connections regression |
||
2102 | * https://tracker.ceph.com/issues/45591 |
||
2103 | mgr: FAILED ceph_assert(daemon != nullptr) |
||
2104 | * https://tracker.ceph.com/issues/50221 |
||
2105 | qa: snaptest-git-ceph failure in git diff |
||
2106 | * https://tracker.ceph.com/issues/50016 |
||
2107 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2108 | |||
2109 | |||
2110 | 3 | Patrick Donnelly | h3. 2021 Apr 15 |
2111 | |||
2112 | https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/ |
||
2113 | |||
2114 | * https://tracker.ceph.com/issues/50281 |
||
2115 | qa: untar_snap_rm timeout |
||
2116 | * https://tracker.ceph.com/issues/50220 |
||
2117 | qa: dbench workload timeout |
||
2118 | * https://tracker.ceph.com/issues/50246 |
||
2119 | mds: failure replaying journal (EMetaBlob) |
||
2120 | * https://tracker.ceph.com/issues/50250 |
||
2121 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" |
||
2122 | * https://tracker.ceph.com/issues/50016 |
||
2123 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2124 | * https://tracker.ceph.com/issues/50222 |
||
2125 | osd: 5.2s0 deep-scrub : stat mismatch |
||
2126 | * https://tracker.ceph.com/issues/45434 |
||
2127 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2128 | * https://tracker.ceph.com/issues/49845 |
||
2129 | qa: failed umount in test_volumes |
||
2130 | * https://tracker.ceph.com/issues/37808 |
||
2131 | osd: osdmap cache weak_refs assert during shutdown |
||
2132 | * https://tracker.ceph.com/issues/50387 |
||
2133 | client: fs/snaps failure |
||
2134 | * https://tracker.ceph.com/issues/50389 |
||
2135 | mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log |
||
2136 | * https://tracker.ceph.com/issues/50216 |
||
2137 | qa: "ls: cannot access 'lost+found': No such file or directory" |
||
2138 | * https://tracker.ceph.com/issues/50390 |
||
2139 | mds: monclient: wait_auth_rotating timed out after 30 |
||
2140 | |||
2141 | |||
2142 | |||
h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"


h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"


h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
qa: quota failure
* https://tracker.ceph.com/issues/49928
client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
mgr: drops command on the floor

and a failure caused by PR: https://github.com/ceph/ceph/pull/39969


h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
qa: iogen: workload fails to cause balancing