Main » History » Version 238
Patrick Donnelly, 03/28/2024 06:33 PM
h1. <code>main</code> branch

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/64502
fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
qa/suites/fs/top: "[WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
qa: "Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
qa/suites/fs/nfs: "cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
[testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
"mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

a) OCI runtime issues in the testing kernel with centos9
b) SELinux denial-related failures
c) unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
[testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is issue 64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.

h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
[testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* from the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/63265
qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- needs to be fixed again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
No module named 'tasks.ceph_fuse'
No module named 'tasks.kclient'
No module named 'tasks.cephfs.fuse_mount'
No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(ignore the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63473
fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59531
"OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
"OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
"OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/58945
qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
"OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

512 | 189 | Rishabh Dave | h3. 9 Oct 2023 |
513 | |||
514 | https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/ |
||
515 | |||
516 | * https://tracker.ceph.com/issues/54460 |
||
517 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
518 | * https://tracker.ceph.com/issues/63141 |
||
519 | test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails |
||
520 | * https://tracker.ceph.com/issues/62937 |
||
521 | logrotate doesn't support parallel execution on same set of logfiles |
||
522 | * https://tracker.ceph.com/issues/61400 |
||
523 | valgrind+ceph-mon issues |
||
524 | * https://tracker.ceph.com/issues/57676 |
||
525 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
526 | * https://tracker.ceph.com/issues/55805 |
||
527 | error during scrub thrashing reached max tries in 900 secs |
||
528 | |||
529 | 188 | Venky Shankar | h3. 26 Sep 2023 |
530 | |||
531 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818 |
||
532 | |||
533 | * https://tracker.ceph.com/issues/52624 |
||
534 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
535 | * https://tracker.ceph.com/issues/62873 |
||
536 | qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits) |
||
537 | * https://tracker.ceph.com/issues/61400 |
||
538 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
539 | * https://tracker.ceph.com/issues/57676 |
||
540 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
541 | * https://tracker.ceph.com/issues/62682 |
||
542 | mon: no mdsmap broadcast after "fs set joinable" is set to true |
||
543 | * https://tracker.ceph.com/issues/63089 |
||
544 | qa: tasks/mirror times out |
||
545 | |||
546 | 185 | Rishabh Dave | h3. 22 Sep 2023 |
547 | |||
548 | https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/ |
||
549 | |||
550 | * https://tracker.ceph.com/issues/59348 |
||
551 | qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
552 | * https://tracker.ceph.com/issues/59344 |
||
553 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
554 | * https://tracker.ceph.com/issues/59531 |
||
555 | "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" |
||
556 | * https://tracker.ceph.com/issues/61574 |
||
557 | build failure for mdtest project |
||
558 | * https://tracker.ceph.com/issues/62702 |
||
559 | fsstress.sh: MDS slow requests for the internal 'rename' requests |
||
560 | * https://tracker.ceph.com/issues/57676 |
||
561 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
562 | |||
563 | * https://tracker.ceph.com/issues/62863 |
||
564 | deadlock in ceph-fuse causes teuthology job to hang and fail |
||
565 | * https://tracker.ceph.com/issues/62870 |
||
566 | test_cluster_info (tasks.cephfs.test_nfs.TestNFS) |
||
567 | * https://tracker.ceph.com/issues/62873 |
||
568 | test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits) |
||
569 | |||
570 | 186 | Venky Shankar | h3. 20 Sep 2023 |
571 | |||
572 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635 |
||
573 | |||
574 | * https://tracker.ceph.com/issues/52624 |
||
575 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
576 | * https://tracker.ceph.com/issues/61400 |
||
577 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
578 | * https://tracker.ceph.com/issues/61399 |
||
579 | libmpich: undefined references to fi_strerror |
||
580 | * https://tracker.ceph.com/issues/62081 |
||
581 | tasks/fscrypt-common does not finish, times out |
||
582 | * https://tracker.ceph.com/issues/62658 |
||
583 | error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds |
||
584 | * https://tracker.ceph.com/issues/62915 |
||
585 | qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases |
||
586 | * https://tracker.ceph.com/issues/59531 |
||
587 | quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" |
||
588 | * https://tracker.ceph.com/issues/62873 |
||
589 | qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits) |
||
590 | * https://tracker.ceph.com/issues/62936 |
||
591 | Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring) |
||
592 | * https://tracker.ceph.com/issues/62937 |
||
593 | Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf' |
||
594 | * https://tracker.ceph.com/issues/62510 |
||
595 | snaptest-git-ceph.sh failure with fs/thrash |
||
596 | * https://tracker.ceph.com/issues/62081 |
||
597 | tasks/fscrypt-common does not finish, times out |
||
598 | * https://tracker.ceph.com/issues/62126 |
||
599 | test failure: suites/blogbench.sh stops running |
||
600 | 187 | Venky Shankar | * https://tracker.ceph.com/issues/62682 |
601 | mon: no mdsmap broadcast after "fs set joinable" is set to true |
||
602 | 186 | Venky Shankar | |
603 | 184 | Milind Changire | h3. 19 Sep 2023 |
604 | |||
605 | http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/ |
||
606 | |||
607 | * https://tracker.ceph.com/issues/58220#note-9 |
||
608 | workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure |
||
609 | * https://tracker.ceph.com/issues/62702 |
||
610 | Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124 |
||
611 | * https://tracker.ceph.com/issues/57676 |
||
612 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
613 | * https://tracker.ceph.com/issues/59348 |
||
614 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
615 | * https://tracker.ceph.com/issues/52624 |
||
616 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
617 | * https://tracker.ceph.com/issues/51964 |
||
618 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
619 | * https://tracker.ceph.com/issues/61243 |
||
620 | test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed |
||
621 | * https://tracker.ceph.com/issues/59344 |
||
622 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
623 | * https://tracker.ceph.com/issues/62873 |
||
624 | qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits) |
||
625 | * https://tracker.ceph.com/issues/59413 |
||
626 | cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128" |
||
627 | * https://tracker.ceph.com/issues/53859 |
||
628 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
629 | * https://tracker.ceph.com/issues/62482 |
||
630 | qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) |
||
631 | |||
632 | 178 | Patrick Donnelly | |
633 | 177 | Venky Shankar | h3. 13 Sep 2023 |
634 | |||
635 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909 |
||
636 | |||
637 | * https://tracker.ceph.com/issues/52624 |
||
638 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
639 | * https://tracker.ceph.com/issues/57655 |
||
640 | qa: fs:mixed-clients kernel_untar_build failure |
||
641 | * https://tracker.ceph.com/issues/57676 |
||
642 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
643 | * https://tracker.ceph.com/issues/61243 |
||
644 | qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed |
||
645 | * https://tracker.ceph.com/issues/62567 |
||
646 | postgres workunit times out - MDS_SLOW_REQUEST in logs |
||
647 | * https://tracker.ceph.com/issues/61400 |
||
648 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
649 | * https://tracker.ceph.com/issues/61399 |
||
650 | libmpich: undefined references to fi_strerror |
||
651 | * https://tracker.ceph.com/issues/57655 |
||
652 | qa: fs:mixed-clients kernel_untar_build failure |
||
653 | * https://tracker.ceph.com/issues/57676 |
||
654 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
655 | * https://tracker.ceph.com/issues/51964 |
||
656 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
657 | * https://tracker.ceph.com/issues/62081 |
||
658 | tasks/fscrypt-common does not finish, times out |
||
659 | 178 | Patrick Donnelly | |
660 | 179 | Patrick Donnelly | h3. 2023 Sep 12 |
661 | 178 | Patrick Donnelly | |
662 | https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/ |
||
663 | 1 | Patrick Donnelly | |
664 | 181 | Patrick Donnelly | A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably: |
665 | |||
666 | 182 | Patrick Donnelly | * Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes. |
667 | 181 | Patrick Donnelly | |
668 | Failures: |
||
669 | |||
670 | 179 | Patrick Donnelly | * https://tracker.ceph.com/issues/59348 |
671 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
672 | * https://tracker.ceph.com/issues/57656 |
||
673 | dbench: write failed on handle 10010 (Resource temporarily unavailable) |
||
674 | * https://tracker.ceph.com/issues/55805 |
||
675 | error scrub thrashing reached max tries in 900 secs |
||
676 | * https://tracker.ceph.com/issues/62067 |
||
677 | ffsb.sh failure "Resource temporarily unavailable" |
||
678 | * https://tracker.ceph.com/issues/59344 |
||
679 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
680 | * https://tracker.ceph.com/issues/61399 |
||
681 | 180 | Patrick Donnelly | libmpich: undefined references to fi_strerror |
682 | * https://tracker.ceph.com/issues/62832 |
||
683 | common: config_proxy deadlock during shutdown (and possibly other times) |
||
684 | * https://tracker.ceph.com/issues/59413 |
||
685 | 1 | Patrick Donnelly | cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128" |
686 | 181 | Patrick Donnelly | * https://tracker.ceph.com/issues/57676 |
687 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
688 | * https://tracker.ceph.com/issues/62567 |
||
689 | Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'" |
||
690 | * https://tracker.ceph.com/issues/54460 |
||
691 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
692 | * https://tracker.ceph.com/issues/58220#note-9 |
||
693 | workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure |
||
694 | * https://tracker.ceph.com/issues/59348 |
||
695 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
696 | 183 | Patrick Donnelly | * https://tracker.ceph.com/issues/62847 |
697 | mds: blogbench requests stuck (5mds+scrub+snaps-flush) |
||
698 | * https://tracker.ceph.com/issues/62848 |
||
699 | qa: fail_fs upgrade scenario hanging |
||
700 | * https://tracker.ceph.com/issues/62081 |
||
701 | tasks/fscrypt-common does not finish, times out |
||
702 | 177 | Venky Shankar | |
703 | 176 | Venky Shankar | h3. 11 Sep 2023 |
704 | 175 | Venky Shankar | |
705 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114 |
||
706 | |||
707 | * https://tracker.ceph.com/issues/52624 |
||
708 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
709 | * https://tracker.ceph.com/issues/61399 |
||
710 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
711 | * https://tracker.ceph.com/issues/57655 |
||
712 | qa: fs:mixed-clients kernel_untar_build failure |
||
713 | * https://tracker.ceph.com/issues/61399 |
||
714 | ior build failure |
||
715 | * https://tracker.ceph.com/issues/59531 |
||
716 | quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" |
||
717 | * https://tracker.ceph.com/issues/59344 |
||
718 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
719 | * https://tracker.ceph.com/issues/59346 |
||
720 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
721 | * https://tracker.ceph.com/issues/59348 |
||
722 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
723 | * https://tracker.ceph.com/issues/57676 |
||
724 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
725 | * https://tracker.ceph.com/issues/61243 |
||
726 | qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed |
||
727 | * https://tracker.ceph.com/issues/62567 |
||
728 | postgres workunit times out - MDS_SLOW_REQUEST in logs |
||
729 | |||
730 | |||
731 | 174 | Rishabh Dave | h3. 6 Sep 2023 Run 2 |
732 | |||
733 | https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ |
||
734 | |||
735 | * https://tracker.ceph.com/issues/51964 |
||
736 | test_cephfs_mirror_restart_sync_on_blocklist failure |
||
737 | * https://tracker.ceph.com/issues/59348 |
||
738 | test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
739 | * https://tracker.ceph.com/issues/53859 |
||
740 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
741 | * https://tracker.ceph.com/issues/61892 |
||
742 | test_strays.TestStrays.test_snapshot_remove failed |
||
743 | * https://tracker.ceph.com/issues/54460 |
||
744 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
745 | * https://tracker.ceph.com/issues/59346 |
||
746 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
747 | * https://tracker.ceph.com/issues/59344 |
||
748 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
749 | * https://tracker.ceph.com/issues/62484 |
||
750 | qa: ffsb.sh test failure |
||
751 | * https://tracker.ceph.com/issues/62567 |
||
752 | Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'" |
||
753 | |||
754 | * https://tracker.ceph.com/issues/61399 |
||
755 | ior build failure |
||
756 | * https://tracker.ceph.com/issues/57676 |
||
757 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
758 | * https://tracker.ceph.com/issues/55805 |
||
759 | error scrub thrashing reached max tries in 900 secs |
||
760 | |||
761 | 172 | Rishabh Dave | h3. 6 Sep 2023 |
762 | 171 | Rishabh Dave | |
763 | 173 | Rishabh Dave | https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/ |
764 | 171 | Rishabh Dave | |
765 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/53859 |
766 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
767 | 173 | Rishabh Dave | * https://tracker.ceph.com/issues/51964 |
768 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
769 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/61892 |
770 | 173 | Rishabh Dave | test_snapshot_remove (test_strays.TestStrays) failed |
771 | * https://tracker.ceph.com/issues/59348 |
||
772 | qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
773 | * https://tracker.ceph.com/issues/54462 |
||
774 | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
||
775 | * https://tracker.ceph.com/issues/62556 |
||
776 | test_acls: xfstests_dev: python2 is missing |
||
777 | * https://tracker.ceph.com/issues/62067 |
||
778 | ffsb.sh failure "Resource temporarily unavailable" |
||
779 | * https://tracker.ceph.com/issues/57656 |
||
780 | dbench: write failed on handle 10010 (Resource temporarily unavailable) |
||
781 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/59346 |
782 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
783 | 171 | Rishabh Dave | * https://tracker.ceph.com/issues/59344 |
784 | 173 | Rishabh Dave | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
785 | |||
786 | 171 | Rishabh Dave | * https://tracker.ceph.com/issues/61399 |
787 | ior build failure |
||
788 | * https://tracker.ceph.com/issues/57676 |
||
789 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
790 | * https://tracker.ceph.com/issues/55805 |
||
791 | error scrub thrashing reached max tries in 900 secs |
||
792 | 173 | Rishabh Dave | |
793 | * https://tracker.ceph.com/issues/62567 |
||
794 | Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'" |
||
795 | * https://tracker.ceph.com/issues/62702 |
||
796 | workunit test suites/fsstress.sh on smithi066 with status 124 |
||
797 | 170 | Rishabh Dave | |
798 | h3. 5 Sep 2023 |
||
799 | |||
800 | https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ |
||
801 | orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/ |
||
802 | this run has failures but, according to Adam King, they are not relevant and should be ignored |
||
803 | |||
804 | * https://tracker.ceph.com/issues/61892 |
||
805 | test_snapshot_remove (test_strays.TestStrays) failed |
||
806 | * https://tracker.ceph.com/issues/59348 |
||
807 | test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
808 | * https://tracker.ceph.com/issues/54462 |
||
809 | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
||
810 | * https://tracker.ceph.com/issues/62067 |
||
811 | ffsb.sh failure "Resource temporarily unavailable" |
||
812 | * https://tracker.ceph.com/issues/57656 |
||
813 | dbench: write failed on handle 10010 (Resource temporarily unavailable) |
||
814 | * https://tracker.ceph.com/issues/59346 |
||
815 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
816 | * https://tracker.ceph.com/issues/59344 |
||
817 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
818 | * https://tracker.ceph.com/issues/50223 |
||
819 | client.xxxx isn't responding to mclientcaps(revoke) |
||
820 | * https://tracker.ceph.com/issues/57655 |
||
821 | qa: fs:mixed-clients kernel_untar_build failure |
||
822 | * https://tracker.ceph.com/issues/62187 |
||
823 | iozone.sh: line 5: iozone: command not found |
||
824 | |||
825 | * https://tracker.ceph.com/issues/61399 |
||
826 | ior build failure |
||
827 | * https://tracker.ceph.com/issues/57676 |
||
828 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
829 | * https://tracker.ceph.com/issues/55805 |
||
830 | error scrub thrashing reached max tries in 900 secs |
||
831 | 169 | Venky Shankar | |
832 | |||
833 | h3. 31 Aug 2023 |
||
834 | |||
835 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828 |
||
836 | |||
837 | * https://tracker.ceph.com/issues/52624 |
||
838 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
839 | * https://tracker.ceph.com/issues/62187 |
||
840 | iozone: command not found |
||
841 | * https://tracker.ceph.com/issues/61399 |
||
842 | ior build failure |
||
843 | * https://tracker.ceph.com/issues/59531 |
||
844 | quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" |
||
845 | * https://tracker.ceph.com/issues/61399 |
||
846 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
847 | * https://tracker.ceph.com/issues/57655 |
||
848 | qa: fs:mixed-clients kernel_untar_build failure |
||
849 | * https://tracker.ceph.com/issues/59344 |
||
850 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
851 | * https://tracker.ceph.com/issues/59346 |
||
852 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
853 | * https://tracker.ceph.com/issues/59348 |
||
854 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
855 | * https://tracker.ceph.com/issues/59413 |
||
856 | cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128" |
||
857 | * https://tracker.ceph.com/issues/62653 |
||
858 | qa: unimplemented fcntl command: 1036 with fsstress |
||
859 | * https://tracker.ceph.com/issues/61400 |
||
860 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
861 | * https://tracker.ceph.com/issues/62658 |
||
862 | error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds |
||
863 | * https://tracker.ceph.com/issues/62188 |
||
864 | AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test) |
||
865 | 168 | Venky Shankar | |
866 | |||
867 | h3. 25 Aug 2023 |
||
868 | |||
869 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807 |
||
870 | |||
871 | * https://tracker.ceph.com/issues/59344 |
||
872 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
873 | * https://tracker.ceph.com/issues/59346 |
||
874 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
875 | * https://tracker.ceph.com/issues/59348 |
||
876 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
877 | * https://tracker.ceph.com/issues/57655 |
||
878 | qa: fs:mixed-clients kernel_untar_build failure |
||
879 | * https://tracker.ceph.com/issues/61243 |
||
880 | test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed |
||
881 | * https://tracker.ceph.com/issues/61399 |
||
882 | ior build failure |
||
883 | * https://tracker.ceph.com/issues/61399 |
||
884 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
885 | * https://tracker.ceph.com/issues/62484 |
||
886 | qa: ffsb.sh test failure |
||
887 | * https://tracker.ceph.com/issues/59531 |
||
888 | quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" |
||
889 | * https://tracker.ceph.com/issues/62510 |
||
890 | snaptest-git-ceph.sh failure with fs/thrash |
||
891 | 167 | Venky Shankar | |
892 | |||
893 | h3. 24 Aug 2023 |
||
894 | |||
895 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131 |
||
896 | |||
897 | * https://tracker.ceph.com/issues/57676 |
||
898 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
899 | * https://tracker.ceph.com/issues/51964 |
||
900 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
901 | * https://tracker.ceph.com/issues/59344 |
||
902 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
903 | * https://tracker.ceph.com/issues/59346 |
||
904 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
905 | * https://tracker.ceph.com/issues/59348 |
||
906 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
907 | * https://tracker.ceph.com/issues/61399 |
||
908 | ior build failure |
||
909 | * https://tracker.ceph.com/issues/61399 |
||
910 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
911 | * https://tracker.ceph.com/issues/62510 |
||
912 | snaptest-git-ceph.sh failure with fs/thrash |
||
913 | * https://tracker.ceph.com/issues/62484 |
||
914 | qa: ffsb.sh test failure |
||
915 | * https://tracker.ceph.com/issues/57087 |
||
916 | qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure |
||
917 | * https://tracker.ceph.com/issues/57656 |
||
918 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
919 | * https://tracker.ceph.com/issues/62187 |
||
920 | iozone: command not found |
||
921 | * https://tracker.ceph.com/issues/62188 |
||
922 | AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test) |
||
923 | * https://tracker.ceph.com/issues/62567 |
||
924 | postgres workunit times out - MDS_SLOW_REQUEST in logs |
||
925 | 166 | Venky Shankar | |
926 | |||
927 | h3. 22 Aug 2023 |
||
928 | |||
929 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933 |
||
930 | |||
931 | * https://tracker.ceph.com/issues/57676 |
||
932 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
933 | * https://tracker.ceph.com/issues/51964 |
||
934 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
935 | * https://tracker.ceph.com/issues/59344 |
||
936 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
937 | * https://tracker.ceph.com/issues/59346 |
||
938 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
939 | * https://tracker.ceph.com/issues/59348 |
||
940 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
941 | * https://tracker.ceph.com/issues/61399 |
||
942 | ior build failure |
||
943 | * https://tracker.ceph.com/issues/61399 |
||
944 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
945 | * https://tracker.ceph.com/issues/57655 |
||
946 | qa: fs:mixed-clients kernel_untar_build failure |
||
947 | * https://tracker.ceph.com/issues/61243 |
||
948 | test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed |
||
949 | * https://tracker.ceph.com/issues/62188 |
||
950 | AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test) |
||
951 | * https://tracker.ceph.com/issues/62510 |
||
952 | snaptest-git-ceph.sh failure with fs/thrash |
||
953 | * https://tracker.ceph.com/issues/62511 |
||
954 | src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down) |
||
955 | 165 | Venky Shankar | |
956 | |||
957 | h3. 14 Aug 2023 |
||
958 | |||
959 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601 |
||
960 | |||
961 | * https://tracker.ceph.com/issues/51964 |
||
962 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
963 | * https://tracker.ceph.com/issues/61400 |
||
964 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
965 | * https://tracker.ceph.com/issues/61399 |
||
966 | ior build failure |
||
967 | * https://tracker.ceph.com/issues/59348 |
||
968 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
969 | * https://tracker.ceph.com/issues/59531 |
||
970 | cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold |
||
971 | * https://tracker.ceph.com/issues/59344 |
||
972 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
973 | * https://tracker.ceph.com/issues/59346 |
||
974 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
975 | * https://tracker.ceph.com/issues/61399 |
||
976 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
977 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
978 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
979 | * https://tracker.ceph.com/issues/61243 (NEW) |
||
980 | test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed |
||
981 | * https://tracker.ceph.com/issues/57655 |
||
982 | qa: fs:mixed-clients kernel_untar_build failure |
||
983 | * https://tracker.ceph.com/issues/57656 |
||
984 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
985 | 163 | Venky Shankar | |
986 | |||
987 | h3. 28 July 2023 |
||
988 | |||
989 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049 |
||
990 | |||
991 | * https://tracker.ceph.com/issues/51964 |
||
992 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
993 | * https://tracker.ceph.com/issues/61400 |
||
994 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
995 | * https://tracker.ceph.com/issues/61399 |
||
996 | ior build failure |
||
997 | * https://tracker.ceph.com/issues/57676 |
||
998 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
999 | * https://tracker.ceph.com/issues/59348 |
||
1000 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1001 | * https://tracker.ceph.com/issues/59531 |
||
1002 | cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold |
||
1003 | * https://tracker.ceph.com/issues/59344 |
||
1004 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1005 | * https://tracker.ceph.com/issues/59346 |
||
1006 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1007 | * https://github.com/ceph/ceph/pull/52556 |
||
1008 | task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4) |
||
1009 | * https://tracker.ceph.com/issues/62187 |
||
1010 | iozone: command not found |
||
1011 | * https://tracker.ceph.com/issues/61399 |
||
1012 | qa: build failure for ior (the failure occurs when compiling `mdtest`) |
||
1013 | * https://tracker.ceph.com/issues/62188 |
||
1014 | 164 | Rishabh Dave | AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test) |
1015 | 158 | Rishabh Dave | |
1016 | h3. 24 Jul 2023 |
||
1017 | |||
1018 | https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/ |
||
1019 | https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/ |
||
1020 | There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures - |
||
1021 | https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/ |
||
1022 | One more run to check whether blogbench.sh fails every time: |
||
1023 | https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/ |
||
1024 | blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing - |
||
1025 | 161 | Rishabh Dave | https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/ |
1026 | |||
1027 | * https://tracker.ceph.com/issues/61892 |
||
1028 | test_snapshot_remove (test_strays.TestStrays) failed |
||
1029 | * https://tracker.ceph.com/issues/53859 |
||
1030 | test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1031 | * https://tracker.ceph.com/issues/61982 |
||
1032 | test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots) |
||
1033 | * https://tracker.ceph.com/issues/52438 |
||
1034 | qa: ffsb timeout |
||
1035 | * https://tracker.ceph.com/issues/54460 |
||
1036 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1037 | * https://tracker.ceph.com/issues/57655 |
||
1038 | qa: fs:mixed-clients kernel_untar_build failure |
||
1039 | * https://tracker.ceph.com/issues/48773 |
||
1040 | reached max tries: scrub does not complete |
||
1041 | * https://tracker.ceph.com/issues/58340 |
||
1042 | mds: fsstress.sh hangs with multimds |
||
1043 | * https://tracker.ceph.com/issues/61400 |
||
1044 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
1045 | * https://tracker.ceph.com/issues/57206 |
||
1046 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1047 | |||
1048 | * https://tracker.ceph.com/issues/57656 |
||
1049 | [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable) |
||
1050 | * https://tracker.ceph.com/issues/61399 |
||
1051 | ior build failure |
||
1052 | * https://tracker.ceph.com/issues/57676 |
||
1053 | error during scrub thrashing: backtrace |
||
1054 | |||
1055 | * https://tracker.ceph.com/issues/38452 |
||
1056 | 'sudo -u postgres -- pgbench -s 500 -i' failed |
||
1057 | 158 | Rishabh Dave | * https://tracker.ceph.com/issues/62126 |
1058 | 157 | Venky Shankar | blogbench.sh failure |
1059 | |||
1060 | h3. 18 July 2023 |
||
1061 | |||
1062 | * https://tracker.ceph.com/issues/52624 |
||
1063 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1064 | * https://tracker.ceph.com/issues/57676 |
||
1065 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1066 | * https://tracker.ceph.com/issues/54460 |
||
1067 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1068 | * https://tracker.ceph.com/issues/57655 |
||
1069 | qa: fs:mixed-clients kernel_untar_build failure |
||
1070 | * https://tracker.ceph.com/issues/51964 |
||
1071 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1072 | * https://tracker.ceph.com/issues/59344 |
||
1073 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1074 | * https://tracker.ceph.com/issues/61182 |
||
1075 | cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds |
||
1076 | * https://tracker.ceph.com/issues/61957 |
||
1077 | test_client_limits.TestClientLimits.test_client_release_bug |
||
1078 | * https://tracker.ceph.com/issues/59348 |
||
1079 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1080 | * https://tracker.ceph.com/issues/61892 |
||
1081 | test_strays.TestStrays.test_snapshot_remove failed |
||
1082 | * https://tracker.ceph.com/issues/59346 |
||
1083 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1084 | * https://tracker.ceph.com/issues/44565 |
||
1085 | src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock()) |
||
1086 | * https://tracker.ceph.com/issues/62067 |
||
1087 | ffsb.sh failure "Resource temporarily unavailable" |
||
1088 | 156 | Venky Shankar | |
1089 | |||
1090 | h3. 17 July 2023 |
||
1091 | |||
1092 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136 |
||
1093 | |||
1094 | * https://tracker.ceph.com/issues/61982 |
||
1095 | Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots) |
||
1096 | * https://tracker.ceph.com/issues/59344 |
||
1097 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1098 | * https://tracker.ceph.com/issues/61182 |
||
1099 | cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds |
||
1100 | * https://tracker.ceph.com/issues/61957 |
||
1101 | test_client_limits.TestClientLimits.test_client_release_bug |
||
1102 | * https://tracker.ceph.com/issues/61400 |
||
1103 | valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc |
||
1104 | * https://tracker.ceph.com/issues/59348 |
||
1105 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1106 | * https://tracker.ceph.com/issues/61892 |
||
1107 | test_strays.TestStrays.test_snapshot_remove failed |
||
1108 | * https://tracker.ceph.com/issues/59346 |
||
1109 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1110 | * https://tracker.ceph.com/issues/62036 |
||
1111 | src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty()) |
||
1112 | * https://tracker.ceph.com/issues/61737 |
||
1113 | coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific' |
||
1114 | * https://tracker.ceph.com/issues/44565 |
||
1115 | src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock()) |
||
1116 | 155 | Rishabh Dave | |
1117 | 1 | Patrick Donnelly | |
1118 | 153 | Rishabh Dave | h3. 13 July 2023 Run 2 |
1119 | 152 | Rishabh Dave | |
1120 | |||
1121 | https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/ |
||
1122 | https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/ |
||
1123 | |||
1124 | * https://tracker.ceph.com/issues/61957 |
||
1125 | test_client_limits.TestClientLimits.test_client_release_bug |
||
1126 | * https://tracker.ceph.com/issues/61982 |
||
1127 | Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots) |
||
1128 | * https://tracker.ceph.com/issues/59348 |
||
1129 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1130 | * https://tracker.ceph.com/issues/59344 |
||
1131 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1132 | * https://tracker.ceph.com/issues/54460 |
||
1133 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1134 | * https://tracker.ceph.com/issues/57655 |
||
1135 | qa: fs:mixed-clients kernel_untar_build failure |
||
1136 | * https://tracker.ceph.com/issues/61400 |
||
1137 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
1138 | * https://tracker.ceph.com/issues/61399 |
||
1139 | ior build failure |
||
1140 | |||
1141 | 151 | Venky Shankar | h3. 13 July 2023 |
1142 | |||
1143 | https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/ |
||
1144 | |||
1145 | * https://tracker.ceph.com/issues/54460 |
||
1146 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1147 | * https://tracker.ceph.com/issues/61400 |
||
1148 | valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc |
||
1149 | * https://tracker.ceph.com/issues/57655 |
||
1150 | qa: fs:mixed-clients kernel_untar_build failure |
||
1151 | * https://tracker.ceph.com/issues/61945 |
||
1152 | LibCephFS.DelegTimeout failure |
||
1153 | * https://tracker.ceph.com/issues/52624 |
||
1154 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1155 | * https://tracker.ceph.com/issues/57676 |
||
1156 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1157 | * https://tracker.ceph.com/issues/59348 |
||
1158 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1159 | * https://tracker.ceph.com/issues/59344 |
||
1160 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1161 | * https://tracker.ceph.com/issues/51964 |
||
1162 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1163 | * https://tracker.ceph.com/issues/59346 |
||
1164 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1165 | * https://tracker.ceph.com/issues/61982 |
||
1166 | Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots) |
||
1167 | 150 | Rishabh Dave | |
1168 | |||
1169 | h3. 13 Jul 2023 |
||
1170 | |||
1171 | https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/ |
||
1172 | https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/ |
||
1173 | |||
1174 | * https://tracker.ceph.com/issues/61957 |
||
1175 | test_client_limits.TestClientLimits.test_client_release_bug |
||
1176 | * https://tracker.ceph.com/issues/59348 |
||
1177 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1178 | * https://tracker.ceph.com/issues/59346 |
||
1179 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1180 | * https://tracker.ceph.com/issues/48773 |
||
1181 | scrub does not complete: reached max tries |
||
1182 | * https://tracker.ceph.com/issues/59344 |
||
1183 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1184 | * https://tracker.ceph.com/issues/52438 |
||
1185 | qa: ffsb timeout |
||
1186 | * https://tracker.ceph.com/issues/57656 |
||
1187 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1188 | * https://tracker.ceph.com/issues/58742 |
||
1189 | xfstests-dev: kcephfs: generic |
||
1190 | * https://tracker.ceph.com/issues/61399 |
||
1191 | 148 | Rishabh Dave | libmpich: undefined references to fi_strerror |
1192 | 149 | Rishabh Dave | |
1193 | 148 | Rishabh Dave | h3. 12 July 2023 |
1194 | |||
1195 | https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/ |
||
1196 | https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/ |
||
1197 | |||
1198 | * https://tracker.ceph.com/issues/61892 |
||
1199 | test_strays.TestStrays.test_snapshot_remove failed |
||
1200 | * https://tracker.ceph.com/issues/59348 |
||
1201 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1202 | * https://tracker.ceph.com/issues/53859 |
||
1203 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1204 | * https://tracker.ceph.com/issues/59346 |
||
1205 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1206 | * https://tracker.ceph.com/issues/58742 |
||
1207 | xfstests-dev: kcephfs: generic |
||
1208 | * https://tracker.ceph.com/issues/59344 |
||
1209 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1210 | * https://tracker.ceph.com/issues/52438 |
||
1211 | qa: ffsb timeout |
||
1212 | * https://tracker.ceph.com/issues/57656 |
||
1213 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1214 | * https://tracker.ceph.com/issues/54460 |
||
1215 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1216 | * https://tracker.ceph.com/issues/57655 |
||
1217 | qa: fs:mixed-clients kernel_untar_build failure |
||
1218 | * https://tracker.ceph.com/issues/61182 |
||
1219 | cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds |
||
1220 | * https://tracker.ceph.com/issues/61400 |
||
1221 | valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default |
||
1222 | 147 | Rishabh Dave | * https://tracker.ceph.com/issues/48773 |
1223 | 146 | Patrick Donnelly | reached max tries: scrub does not complete |
1224 | |||
1225 | h3. 05 July 2023 |
||
1226 | |||
1227 | https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/ |
||
1228 | |||
1229 | 137 | Rishabh Dave | * https://tracker.ceph.com/issues/59346 |
1230 | 143 | Rishabh Dave | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
1231 | |||
1232 | h3. 27 Jun 2023 |
||
1233 | |||
1234 | https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/ |
||
1235 | 144 | Rishabh Dave | https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/ |
1236 | |||
1237 | * https://tracker.ceph.com/issues/59348 |
||
1238 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1239 | * https://tracker.ceph.com/issues/54460 |
||
1240 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1241 | * https://tracker.ceph.com/issues/59346 |
||
1242 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1243 | * https://tracker.ceph.com/issues/59344 |
||
1244 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1245 | * https://tracker.ceph.com/issues/61399 |
||
1246 | libmpich: undefined references to fi_strerror |
||
1247 | * https://tracker.ceph.com/issues/50223 |
||
1248 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1249 | 143 | Rishabh Dave | * https://tracker.ceph.com/issues/61831 |
1250 | Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring) |
||
1251 | 142 | Venky Shankar | |
1252 | |||
1253 | h3. 22 June 2023 |
||
1254 | |||
1255 | * https://tracker.ceph.com/issues/57676 |
||
1256 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1257 | * https://tracker.ceph.com/issues/54460 |
||
1258 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1259 | * https://tracker.ceph.com/issues/59344 |
||
1260 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1261 | * https://tracker.ceph.com/issues/59348 |
||
1262 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1263 | * https://tracker.ceph.com/issues/61400 |
||
1264 | valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc |
||
1265 | * https://tracker.ceph.com/issues/57655 |
||
1266 | qa: fs:mixed-clients kernel_untar_build failure |
||
1267 | * https://tracker.ceph.com/issues/61394 |
||
1268 | qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log |
||
1269 | * https://tracker.ceph.com/issues/61762 |
||
1270 | qa: wait_for_clean: failed before timeout expired |
||
1271 | * https://tracker.ceph.com/issues/61775 |
||
1272 | cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests) |
||
1273 | * https://tracker.ceph.com/issues/44565 |
||
1274 | src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock()) |
||
1275 | * https://tracker.ceph.com/issues/61790 |
||
1276 | cephfs client to mds comms remain silent after reconnect |
||
1277 | * https://tracker.ceph.com/issues/61791 |
||
1278 | snaptest-git-ceph.sh test timed out (job dead) |
||
1279 | 139 | Venky Shankar | |
1280 | |||
1281 | h3. 20 June 2023 |
||
1282 | |||
1283 | https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/ |
||
1284 | |||
1285 | * https://tracker.ceph.com/issues/57676 |
||
1286 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1287 | * https://tracker.ceph.com/issues/54460 |
||
1288 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1289 | 140 | Venky Shankar | * https://tracker.ceph.com/issues/54462 |
1290 | 1 | Patrick Donnelly | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
1291 | 141 | Venky Shankar | * https://tracker.ceph.com/issues/58340 |
1292 | 139 | Venky Shankar | mds: fsstress.sh hangs with multimds |
1293 | * https://tracker.ceph.com/issues/59344 |
||
1294 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1295 | * https://tracker.ceph.com/issues/59348 |
||
1296 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1297 | * https://tracker.ceph.com/issues/57656 |
||
1298 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1299 | * https://tracker.ceph.com/issues/61400 |
||
1300 | valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc |
||
1301 | * https://tracker.ceph.com/issues/57655 |
||
1302 | qa: fs:mixed-clients kernel_untar_build failure |
||
1303 | * https://tracker.ceph.com/issues/44565 |
||
1304 | src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock()) |
||
1305 | * https://tracker.ceph.com/issues/61737 |
||
1306 | 138 | Rishabh Dave | coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific' |
1307 | |||
1308 | h3. 16 June 2023 |
||
1309 | |||
1310 | 1 | Patrick Donnelly | https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/ |
1311 | 145 | Rishabh Dave | https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/ |
1312 | 138 | Rishabh Dave | https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/ |
1313 | 1 | Patrick Donnelly | (binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/ |
1314 | |||
1315 | |||
1316 | * https://tracker.ceph.com/issues/59344 |
||
1317 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1318 | 138 | Rishabh Dave | * https://tracker.ceph.com/issues/59348 |
1319 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1320 | 145 | Rishabh Dave | * https://tracker.ceph.com/issues/59346 |
1321 | fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1322 | * https://tracker.ceph.com/issues/57656 |
||
1323 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1324 | * https://tracker.ceph.com/issues/54460 |
||
1325 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1326 | 138 | Rishabh Dave | * https://tracker.ceph.com/issues/54462 |
1327 | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
||
1328 | 145 | Rishabh Dave | * https://tracker.ceph.com/issues/61399 |
1329 | libmpich: undefined references to fi_strerror |
||
1330 | * https://tracker.ceph.com/issues/58945 |
||
1331 | xfstests-dev: ceph-fuse: generic |
||
1332 | 138 | Rishabh Dave | * https://tracker.ceph.com/issues/58742 |
1333 | 136 | Patrick Donnelly | xfstests-dev: kcephfs: generic |
1334 | |||
1335 | h3. 24 May 2023 |
||
1336 | |||
1337 | https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/ |
||
1338 | |||
1339 | * https://tracker.ceph.com/issues/57676 |
||
1340 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1341 | * https://tracker.ceph.com/issues/59683 |
||
1342 | Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests |
||
1343 | * https://tracker.ceph.com/issues/61399 |
||
1344 | qa: "[Makefile:299: ior] Error 1" |
||
1345 | * https://tracker.ceph.com/issues/61265 |
||
1346 | qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount |
||
1347 | * https://tracker.ceph.com/issues/59348 |
||
1348 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1349 | * https://tracker.ceph.com/issues/59346 |
||
1350 | qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" |
||
1351 | * https://tracker.ceph.com/issues/61400 |
||
1352 | valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc |
||
1353 | * https://tracker.ceph.com/issues/54460 |
||
1354 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1355 | * https://tracker.ceph.com/issues/51964 |
||
1356 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1357 | * https://tracker.ceph.com/issues/59344 |
||
1358 | qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" |
||
1359 | * https://tracker.ceph.com/issues/61407 |
||
1360 | mds: abort on CInode::verify_dirfrags |
||
1361 | * https://tracker.ceph.com/issues/48773 |
||
1362 | qa: scrub does not complete |
||
1363 | * https://tracker.ceph.com/issues/57655 |
||
1364 | qa: fs:mixed-clients kernel_untar_build failure |
||
1365 | * https://tracker.ceph.com/issues/61409 |
||
1366 | 128 | Venky Shankar | qa: _test_stale_caps does not wait for file flush before stat |
1367 | |||
1368 | h3. 15 May 2023 |
||
1369 | 130 | Venky Shankar | |
1370 | 128 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020 |
1371 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6 |
||
1372 | |||
1373 | * https://tracker.ceph.com/issues/52624 |
||
1374 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1375 | * https://tracker.ceph.com/issues/54460 |
||
1376 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1377 | * https://tracker.ceph.com/issues/57676 |
||
1378 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1379 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
1380 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
1381 | * https://tracker.ceph.com/issues/59348 |
||
1382 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1383 | 131 | Venky Shankar | * https://tracker.ceph.com/issues/61148 |
1384 | dbench test results in call trace in dmesg [kclient bug] |
||
1385 | 133 | Kotresh Hiremath Ravishankar | * https://tracker.ceph.com/issues/58340 |
1386 | 134 | Kotresh Hiremath Ravishankar | mds: fsstress.sh hangs with multimds |
1387 | 125 | Venky Shankar | |
1388 | |||
1389 | 129 | Rishabh Dave | h3. 11 May 2023 |
1390 | |||
1391 | https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/ |
||
1392 | |||
1393 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
1394 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
1395 | * https://tracker.ceph.com/issues/59348 |
||
1396 | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
||
1397 | * https://tracker.ceph.com/issues/57655 |
||
1398 | qa: fs:mixed-clients kernel_untar_build failure |
||
1399 | * https://tracker.ceph.com/issues/57676 |
||
1400 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1401 | * https://tracker.ceph.com/issues/55805 |
||
1402 | error during scrub thrashing reached max tries in 900 secs |
||
1403 | * https://tracker.ceph.com/issues/54460 |
||
1404 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1405 | * https://tracker.ceph.com/issues/57656 |
||
1406 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1407 | * https://tracker.ceph.com/issues/58220 |
||
1408 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
1409 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/58220#note-9 |
1410 | workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure |
||
1411 | 134 | Kotresh Hiremath Ravishankar | * https://tracker.ceph.com/issues/59342 |
1412 | qa/workunits/kernel_untar_build.sh failed when compiling the Linux source |
||
1413 | 135 | Kotresh Hiremath Ravishankar | * https://tracker.ceph.com/issues/58949 |
1414 | test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write |
||
1415 | 129 | Rishabh Dave | * https://tracker.ceph.com/issues/61243 (NEW) |
1416 | test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed |
||
1417 | |||
1418 | 125 | Venky Shankar | h3. 11 May 2023 |
1419 | 127 | Venky Shankar | |
1420 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005 |
||
1421 | 126 | Venky Shankar | |
1422 | 125 | Venky Shankar | (no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 |
1423 | was included in the branch; however, the PR was updated and needs a retest). |
||
1424 | |||
1425 | * https://tracker.ceph.com/issues/52624 |
||
1426 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1427 | * https://tracker.ceph.com/issues/54460 |
||
1428 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1429 | * https://tracker.ceph.com/issues/57676 |
||
1430 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1431 | * https://tracker.ceph.com/issues/59683 |
||
1432 | Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests |
||
1433 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
1434 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
1435 | * https://tracker.ceph.com/issues/59348 |
||
1436 | 124 | Venky Shankar | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
1437 | |||
1438 | h3. 09 May 2023 |
||
1439 | |||
1440 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554 |
||
1441 | |||
1442 | * https://tracker.ceph.com/issues/52624 |
||
1443 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1444 | * https://tracker.ceph.com/issues/58340 |
||
1445 | mds: fsstress.sh hangs with multimds |
||
1446 | * https://tracker.ceph.com/issues/54460 |
||
1447 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1448 | * https://tracker.ceph.com/issues/57676 |
||
1449 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1450 | * https://tracker.ceph.com/issues/51964 |
||
1451 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1452 | * https://tracker.ceph.com/issues/59350 |
||
1453 | qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR |
||
1454 | * https://tracker.ceph.com/issues/59683 |
||
1455 | Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests |
||
1456 | * https://tracker.ceph.com/issues/59684 [kclient bug] |
||
1457 | Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt) |
||
1458 | * https://tracker.ceph.com/issues/59348 |
||
1459 | 123 | Venky Shankar | qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota) |
1460 | |||
1461 | h3. 10 Apr 2023 |
||
1462 | |||
1463 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356 |
||
1464 | |||
1465 | * https://tracker.ceph.com/issues/52624 |
||
1466 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1467 | * https://tracker.ceph.com/issues/58340 |
||
1468 | mds: fsstress.sh hangs with multimds |
||
1469 | * https://tracker.ceph.com/issues/54460 |
||
1470 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1471 | * https://tracker.ceph.com/issues/57676 |
||
1472 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1473 | 119 | Rishabh Dave | * https://tracker.ceph.com/issues/51964 |
1474 | 120 | Rishabh Dave | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
1475 | 121 | Rishabh Dave | |
1476 | 120 | Rishabh Dave | h3. 31 Mar 2023 |
1477 | 122 | Rishabh Dave | |
1478 | run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/ |
||
1479 | 120 | Rishabh Dave | re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/ |
1480 | re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/ |
||
1481 | |||
1482 | There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs). |
||
1483 | |||
1484 | * https://tracker.ceph.com/issues/57676 |
||
1485 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1486 | * https://tracker.ceph.com/issues/54460 |
||
1487 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1488 | * https://tracker.ceph.com/issues/58220 |
||
1489 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
1490 | * https://tracker.ceph.com/issues/58220#note-9 |
||
1491 | workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure |
||
1492 | * https://tracker.ceph.com/issues/56695 |
||
1493 | Command failed (workunit test suites/pjd.sh) |
||
1494 | * https://tracker.ceph.com/issues/58564 |
||
1495 | workunit dbench failed with error code 1 |
||
1496 | * https://tracker.ceph.com/issues/57206 |
||
1497 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1498 | * https://tracker.ceph.com/issues/57580 |
||
1499 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
1500 | * https://tracker.ceph.com/issues/58940 |
||
1501 | ceph osd hit ceph_abort |
||
1502 | * https://tracker.ceph.com/issues/55805 |
||
1503 | 118 | Venky Shankar | error scrub thrashing reached max tries in 900 secs |
1504 | |||
1505 | h3. 30 March 2023 |
||
1506 | |||
1507 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747 |
||
1508 | |||
1509 | * https://tracker.ceph.com/issues/58938 |
||
1510 | qa: xfstests-dev's generic test suite has 7 failures with kclient |
||
1511 | * https://tracker.ceph.com/issues/51964 |
||
1512 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1513 | * https://tracker.ceph.com/issues/58340 |
||
1514 | 114 | Venky Shankar | mds: fsstress.sh hangs with multimds |
1515 | |||
1516 | 115 | Venky Shankar | h3. 29 March 2023 |
1517 | 114 | Venky Shankar | |
1518 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222 |
||
1519 | |||
1520 | * https://tracker.ceph.com/issues/56695 |
||
1521 | [RHEL stock] pjd test failures |
||
1522 | * https://tracker.ceph.com/issues/57676 |
||
1523 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1524 | * https://tracker.ceph.com/issues/57087 |
||
1525 | qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure |
||
1526 | 116 | Venky Shankar | * https://tracker.ceph.com/issues/58340 |
1527 | mds: fsstress.sh hangs with multimds |
||
1528 | 114 | Venky Shankar | * https://tracker.ceph.com/issues/57655 |
1529 | qa: fs:mixed-clients kernel_untar_build failure |
||
1530 | 117 | Venky Shankar | * https://tracker.ceph.com/issues/59230 |
1531 | Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage) |
||
1532 | 114 | Venky Shankar | * https://tracker.ceph.com/issues/58938 |
1533 | 113 | Venky Shankar | qa: xfstests-dev's generic test suite has 7 failures with kclient |
1534 | |||
1535 | h3. 13 Mar 2023 |
||
1536 | |||
1537 | * https://tracker.ceph.com/issues/56695 |
||
1538 | [RHEL stock] pjd test failures |
||
1539 | * https://tracker.ceph.com/issues/57676 |
||
1540 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1541 | * https://tracker.ceph.com/issues/51964 |
||
1542 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1543 | * https://tracker.ceph.com/issues/54460 |
||
1544 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1545 | * https://tracker.ceph.com/issues/57656 |
||
1546 | 112 | Venky Shankar | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
1547 | |||
1548 | h3. 09 Mar 2023 |
||
1549 | |||
1550 | https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/ |
||
1551 | https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/ |
||
1552 | |||
1553 | * https://tracker.ceph.com/issues/56695 |
||
1554 | [RHEL stock] pjd test failures |
||
1555 | * https://tracker.ceph.com/issues/57676 |
||
1556 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1557 | * https://tracker.ceph.com/issues/51964 |
||
1558 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1559 | * https://tracker.ceph.com/issues/54460 |
||
1560 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1561 | * https://tracker.ceph.com/issues/58340 |
||
1562 | mds: fsstress.sh hangs with multimds |
||
1563 | * https://tracker.ceph.com/issues/57087 |
||
1564 | 111 | Venky Shankar | qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure |
1565 | |||
1566 | h3. 07 Mar 2023 |
||
1567 | |||
1568 | https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/ |
||
1569 | https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/ |
||
1570 | |||
1571 | * https://tracker.ceph.com/issues/56695 |
||
1572 | [RHEL stock] pjd test failures |
||
1573 | * https://tracker.ceph.com/issues/57676 |
||
1574 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1575 | * https://tracker.ceph.com/issues/51964 |
||
1576 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1577 | * https://tracker.ceph.com/issues/57656 |
||
1578 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1579 | * https://tracker.ceph.com/issues/57655 |
||
1580 | qa: fs:mixed-clients kernel_untar_build failure |
||
1581 | * https://tracker.ceph.com/issues/58220 |
||
1582 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
1583 | * https://tracker.ceph.com/issues/54460 |
||
1584 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1585 | * https://tracker.ceph.com/issues/58934 |
||
1586 | 109 | Venky Shankar | snaptest-git-ceph.sh failure with ceph-fuse |
1587 | |||
1588 | h3. 28 Feb 2023 |
||
1589 | |||
1590 | https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/ |
||
1591 | |||
1592 | * https://tracker.ceph.com/issues/56695 |
||
1593 | [RHEL stock] pjd test failures |
||
1594 | * https://tracker.ceph.com/issues/57676 |
||
1595 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1596 | 110 | Venky Shankar | * https://tracker.ceph.com/issues/56446 |
1597 | 109 | Venky Shankar | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
1598 | |||
1599 | 107 | Venky Shankar | (teuthology infra issues causing testing delays - merging PRs which have tests passing) |
1600 | |||
1601 | h3. 25 Jan 2023 |
||
1602 | |||
1603 | https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/ |
||
1604 | |||
1605 | * https://tracker.ceph.com/issues/52624 |
||
1606 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1607 | * https://tracker.ceph.com/issues/56695 |
||
1608 | [RHEL stock] pjd test failures |
||
1609 | * https://tracker.ceph.com/issues/57676 |
||
1610 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1611 | * https://tracker.ceph.com/issues/56446 |
||
1612 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1613 | * https://tracker.ceph.com/issues/57206 |
||
1614 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1615 | * https://tracker.ceph.com/issues/58220 |
||
1616 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
1617 | * https://tracker.ceph.com/issues/58340 |
||
1618 | mds: fsstress.sh hangs with multimds |
||
1619 | * https://tracker.ceph.com/issues/56011 |
||
1620 | fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison |
||
1621 | * https://tracker.ceph.com/issues/54460 |
||
1622 | 101 | Rishabh Dave | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
1623 | |||
1624 | h3. 30 Jan 2023 |
||
1625 | |||
1626 | run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/ |
||
1627 | re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/ |
||
1628 | 105 | Rishabh Dave | re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/ |
1629 | |||
1630 | 101 | Rishabh Dave | * https://tracker.ceph.com/issues/52624 |
1631 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1632 | * https://tracker.ceph.com/issues/56695 |
||
1633 | [RHEL stock] pjd test failures |
||
1634 | * https://tracker.ceph.com/issues/57676 |
||
1635 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1636 | * https://tracker.ceph.com/issues/55332 |
||
1637 | Failure in snaptest-git-ceph.sh |
||
1638 | * https://tracker.ceph.com/issues/51964 |
||
1639 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1640 | * https://tracker.ceph.com/issues/56446 |
||
1641 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1642 | * https://tracker.ceph.com/issues/57655 |
||
1643 | qa: fs:mixed-clients kernel_untar_build failure |
||
1644 | * https://tracker.ceph.com/issues/54460 |
||
1645 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1646 | 103 | Rishabh Dave | * https://tracker.ceph.com/issues/58340 |
1647 | mds: fsstress.sh hangs with multimds |
||
1648 | 101 | Rishabh Dave | * https://tracker.ceph.com/issues/58219 |
1649 | 102 | Rishabh Dave | Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json' |
1650 | |||
1651 | * "Failed to load ceph-mgr modules: prometheus" in cluster log |
||
1652 | 106 | Rishabh Dave | http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086 |
1653 | According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8 |
||
1654 | 102 | Rishabh Dave | * Created https://tracker.ceph.com/issues/58564 |
1655 | 100 | Venky Shankar | workunit test suites/dbench.sh failed error code 1 |
1656 | |||
1657 | h3. 15 Dec 2022 |
||
1658 | |||
1659 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736 |
||
1660 | |||
1661 | * https://tracker.ceph.com/issues/52624 |
||
1662 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1663 | * https://tracker.ceph.com/issues/56695 |
||
1664 | [RHEL stock] pjd test failures |
||
1665 | * https://tracker.ceph.com/issues/58219 |
||
1666 | Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration) |
||
1667 | * https://tracker.ceph.com/issues/57655 |
||
1668 | qa: fs:mixed-clients kernel_untar_build failure |
||
1669 | * https://tracker.ceph.com/issues/57676 |
||
1670 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1671 | * https://tracker.ceph.com/issues/58340 |
||
1672 | 96 | Venky Shankar | mds: fsstress.sh hangs with multimds |
1673 | |||
1674 | h3. 08 Dec 2022 |
||
1675 | 99 | Venky Shankar | |
1676 | 96 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104 |
1677 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803 |
||
1678 | |||
1679 | (lots of transient git.ceph.com failures) |
||
1680 | |||
1681 | * https://tracker.ceph.com/issues/52624 |
||
1682 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1683 | * https://tracker.ceph.com/issues/56695 |
||
1684 | [RHEL stock] pjd test failures |
||
1685 | * https://tracker.ceph.com/issues/57655 |
||
1686 | qa: fs:mixed-clients kernel_untar_build failure |
||
1687 | * https://tracker.ceph.com/issues/58219 |
||
1688 | Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration) |
||
1689 | * https://tracker.ceph.com/issues/58220 |
||
1690 | Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1: |
||
1691 | 97 | Venky Shankar | * https://tracker.ceph.com/issues/57676 |
1692 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1693 | 98 | Venky Shankar | * https://tracker.ceph.com/issues/53859 |
1694 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1695 | * https://tracker.ceph.com/issues/54460 |
||
1696 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1697 | 96 | Venky Shankar | * https://tracker.ceph.com/issues/58244 |
1698 | 95 | Venky Shankar | Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan) |
1699 | |||
1700 | h3. 14 Oct 2022 |
||
1701 | |||
1702 | https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/ |
||
1703 | https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/ |
||
1704 | |||
1705 | * https://tracker.ceph.com/issues/52624 |
||
1706 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1707 | * https://tracker.ceph.com/issues/55804 |
||
1708 | Command failed (workunit test suites/pjd.sh) |
||
1709 | * https://tracker.ceph.com/issues/51964 |
||
1710 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1711 | * https://tracker.ceph.com/issues/57682 |
||
1712 | client: ERROR: test_reconnect_after_blocklisted |
||
1713 | 90 | Rishabh Dave | * https://tracker.ceph.com/issues/54460 |
1714 | 91 | Rishabh Dave | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
1715 | |||
1716 | h3. 10 Oct 2022 |
||
1717 | 92 | Rishabh Dave | |
1718 | 91 | Rishabh Dave | http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/ |
1719 | |||
1720 | reruns |
||
1721 | * fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/ |
||
1722 | 94 | Rishabh Dave | * fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/ |
1723 | 91 | Rishabh Dave | * cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/ |
1724 | 93 | Rishabh Dave | ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458 |
1725 | 91 | Rishabh Dave | |
1726 | known bugs |
||
1727 | * https://tracker.ceph.com/issues/52624 |
||
1728 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1729 | * https://tracker.ceph.com/issues/50223 |
||
1730 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1731 | * https://tracker.ceph.com/issues/57299 |
||
1732 | qa: test_dump_loads fails with JSONDecodeError |
||
1733 | * https://tracker.ceph.com/issues/57655 [Exists in main as well] |
||
1734 | qa: fs:mixed-clients kernel_untar_build failure |
||
1735 | * https://tracker.ceph.com/issues/57206 |
||
1736 | 90 | Rishabh Dave | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
1737 | |||
1738 | h3. 2022 Sep 29 |
||
1739 | |||
1740 | http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/ |
||
1741 | |||
1742 | * https://tracker.ceph.com/issues/55804 |
||
1743 | Command failed (workunit test suites/pjd.sh) |
||
1744 | * https://tracker.ceph.com/issues/36593 |
||
1745 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
1746 | * https://tracker.ceph.com/issues/52624 |
||
1747 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1748 | * https://tracker.ceph.com/issues/51964 |
||
1749 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1750 | * https://tracker.ceph.com/issues/56632 |
||
1751 | Test failure: test_subvolume_snapshot_clone_quota_exceeded |
||
1752 | * https://tracker.ceph.com/issues/50821 |
||
1753 | 88 | Patrick Donnelly | qa: untar_snap_rm failure during mds thrashing |
1754 | |||
1755 | h3. 2022 Sep 26 |
||
1756 | |||
1757 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109 |
||
1758 | |||
1759 | * https://tracker.ceph.com/issues/55804 |
||
1760 | qa failure: pjd link tests failed |
||
1761 | * https://tracker.ceph.com/issues/57676 |
||
1762 | qa: error during scrub thrashing: rank damage found: {'backtrace'} |
||
1763 | * https://tracker.ceph.com/issues/52624 |
||
1764 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1765 | * https://tracker.ceph.com/issues/57580 |
||
1766 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
1767 | * https://tracker.ceph.com/issues/48773 |
||
1768 | qa: scrub does not complete |
||
1769 | * https://tracker.ceph.com/issues/57299 |
||
1770 | qa: test_dump_loads fails with JSONDecodeError |
||
1771 | * https://tracker.ceph.com/issues/57280 |
||
1772 | qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman |
||
1773 | * https://tracker.ceph.com/issues/57205 |
||
1774 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1775 | * https://tracker.ceph.com/issues/57656 |
||
1776 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1777 | * https://tracker.ceph.com/issues/57677 |
||
1778 | qa: "1 MDSs behind on trimming (MDS_TRIM)" |
||
1779 | * https://tracker.ceph.com/issues/57206 |
||
1780 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1781 | * https://tracker.ceph.com/issues/57446 |
||
1782 | qa: test_subvolume_snapshot_info_if_orphan_clone fails |
||
1783 | 89 | Patrick Donnelly | * https://tracker.ceph.com/issues/57655 [Exists in main as well] |
1784 | qa: fs:mixed-clients kernel_untar_build failure |
||
1785 | 88 | Patrick Donnelly | * https://tracker.ceph.com/issues/57682 |
1786 | client: ERROR: test_reconnect_after_blocklisted |
||
1787 | 87 | Patrick Donnelly | |
1788 | |||
1789 | h3. 2022 Sep 22 |
||
1790 | |||
1791 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701 |
||
1792 | |||
1793 | * https://tracker.ceph.com/issues/57299 |
||
1794 | qa: test_dump_loads fails with JSONDecodeError |
||
1795 | * https://tracker.ceph.com/issues/57205 |
||
1796 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1797 | * https://tracker.ceph.com/issues/52624 |
||
1798 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1799 | * https://tracker.ceph.com/issues/57580 |
||
1800 | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
||
1801 | * https://tracker.ceph.com/issues/57280 |
||
1802 | qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman |
||
1803 | * https://tracker.ceph.com/issues/48773 |
||
1804 | qa: scrub does not complete |
||
1805 | * https://tracker.ceph.com/issues/56446 |
||
1806 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1807 | * https://tracker.ceph.com/issues/57206 |
||
1808 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1809 | * https://tracker.ceph.com/issues/51267 |
||
1810 | CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... |
||
1811 | |||
1812 | NEW: |
||
1813 | |||
1814 | * https://tracker.ceph.com/issues/57656 |
||
1815 | [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable) |
||
1816 | * https://tracker.ceph.com/issues/57655 [Exists in main as well] |
||
1817 | qa: fs:mixed-clients kernel_untar_build failure |
||
1818 | * https://tracker.ceph.com/issues/57657 |
||
1819 | mds: scrub locates mismatch between child accounted_rstats and self rstats |
||
1820 | |||
1821 | Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799 |
||
1822 | 80 | Venky Shankar | |
1823 | 79 | Venky Shankar | |
1824 | h3. 2022 Sep 16 |
||
1825 | |||
1826 | https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828 |
||
1827 | |||
1828 | * https://tracker.ceph.com/issues/57446 |
||
1829 | qa: test_subvolume_snapshot_info_if_orphan_clone fails |
||
1830 | * https://tracker.ceph.com/issues/57299 |
||
1831 | qa: test_dump_loads fails with JSONDecodeError |
||
1832 | * https://tracker.ceph.com/issues/50223 |
||
1833 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1834 | * https://tracker.ceph.com/issues/52624 |
||
1835 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1836 | * https://tracker.ceph.com/issues/57205 |
||
1837 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1838 | * https://tracker.ceph.com/issues/57280 |
||
1839 | qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman |
||
1840 | * https://tracker.ceph.com/issues/51282 |
||
1841 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
1842 | * https://tracker.ceph.com/issues/48203 |
||
1843 | qa: quota failure |
||
1844 | * https://tracker.ceph.com/issues/36593 |
||
1845 | qa: quota failure caused by clients stepping on each other |
||
1846 | * https://tracker.ceph.com/issues/57580 |
||
1847 | 77 | Rishabh Dave | Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps) |
1848 | |||
1849 | 76 | Rishabh Dave | |
1850 | h3. 2022 Aug 26 |
||
1851 | |||
1852 | http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/ |
||
1853 | http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/ |
||
1854 | |||
1855 | * https://tracker.ceph.com/issues/57206 |
||
1856 | libcephfs/test.sh: ceph_test_libcephfs_reclaim |
||
1857 | * https://tracker.ceph.com/issues/56632 |
||
1858 | Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones) |
||
1859 | * https://tracker.ceph.com/issues/56446 |
||
1860 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1861 | * https://tracker.ceph.com/issues/51964 |
||
1862 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1863 | * https://tracker.ceph.com/issues/53859 |
||
1864 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1865 | |||
1866 | * https://tracker.ceph.com/issues/54460 |
||
1867 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1868 | * https://tracker.ceph.com/issues/54462 |
||
1869 | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
||
1872 | * https://tracker.ceph.com/issues/36593 |
||
1873 | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
||
1874 | |||
1875 | * https://tracker.ceph.com/issues/52624 |
||
1876 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1877 | * https://tracker.ceph.com/issues/55804 |
||
1878 | Command failed (workunit test suites/pjd.sh) |
||
1879 | * https://tracker.ceph.com/issues/50223 |
||
1880 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1881 | 75 | Venky Shankar | |
1882 | |||
1883 | h3. 2022 Aug 22 |
||
1884 | |||
1885 | https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/ |
||
1886 | https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run) |
||
1887 | |||
1888 | * https://tracker.ceph.com/issues/52624 |
||
1889 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1890 | * https://tracker.ceph.com/issues/56446 |
||
1891 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1892 | * https://tracker.ceph.com/issues/55804 |
||
1893 | Command failed (workunit test suites/pjd.sh) |
||
1894 | * https://tracker.ceph.com/issues/51278 |
||
1895 | mds: "FAILED ceph_assert(!segments.empty())" |
||
1896 | * https://tracker.ceph.com/issues/54460 |
||
1897 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1898 | * https://tracker.ceph.com/issues/57205 |
||
1899 | Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) |
||
1900 | * https://tracker.ceph.com/issues/57206 |
||
1901 | ceph_test_libcephfs_reclaim crashes during test |
||
1902 | * https://tracker.ceph.com/issues/53859 |
||
1903 | Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1904 | * https://tracker.ceph.com/issues/50223 |
||
1905 | 72 | Venky Shankar | client.xxxx isn't responding to mclientcaps(revoke) |
1906 | |||
1907 | h3. 2022 Aug 12 |
||
1908 | |||
1909 | https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/ |
||
1910 | https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run) |
||
1911 | |||
1912 | * https://tracker.ceph.com/issues/52624 |
||
1913 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
1914 | * https://tracker.ceph.com/issues/56446 |
||
1915 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
1916 | * https://tracker.ceph.com/issues/51964 |
||
1917 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
1918 | * https://tracker.ceph.com/issues/55804 |
||
1919 | Command failed (workunit test suites/pjd.sh) |
||
1920 | * https://tracker.ceph.com/issues/50223 |
||
1921 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1922 | * https://tracker.ceph.com/issues/50821 |
||
1923 | 73 | Venky Shankar | qa: untar_snap_rm failure during mds thrashing |
1924 | 72 | Venky Shankar | * https://tracker.ceph.com/issues/54460 |
1925 | 71 | Venky Shankar | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
1926 | |||
1927 | h3. 2022 Aug 04 |
||
1928 | |||
1929 | https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats) |
||
1930 | |||
1931 | 69 | Rishabh Dave | Unrelated teuthology failure on RHEL |
1932 | 68 | Rishabh Dave | |
1933 | h3. 2022 Jul 25 |
||
1934 | |||
1935 | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/ |
||
1936 | |||
1937 | 74 | Rishabh Dave | 1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi |
1938 | 2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/ |
||
1939 | 68 | Rishabh Dave | 3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/ |
1940 | 4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/ |
||
1941 | |||
1942 | * https://tracker.ceph.com/issues/55804 |
||
1943 | Command failed (workunit test suites/pjd.sh) |
||
1944 | * https://tracker.ceph.com/issues/50223 |
||
1945 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1946 | |||
1947 | * https://tracker.ceph.com/issues/54460 |
||
1948 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1 |
||
1949 | 1 | Patrick Donnelly | * https://tracker.ceph.com/issues/36593 |
1950 | 74 | Rishabh Dave | Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1 |
1951 | 68 | Rishabh Dave | * https://tracker.ceph.com/issues/54462 |
1952 | 67 | Patrick Donnelly | Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128 |
1953 | |||
1954 | h3. 2022 July 22 |
||
1955 | |||
1956 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756 |
||
1957 | |||
1958 | MDS_HEALTH_DUMMY error in log fixed by a follow-up commit. |
||
1959 | transient SELinux ping failure |
||
1960 | |||
1961 | * https://tracker.ceph.com/issues/56694 |
||
1962 | qa: avoid blocking forever on hung umount |
||
1963 | * https://tracker.ceph.com/issues/56695 |
||
1964 | [RHEL stock] pjd test failures |
||
1965 | * https://tracker.ceph.com/issues/56696 |
||
1966 | admin keyring disappears during qa run |
||
1967 | * https://tracker.ceph.com/issues/56697 |
||
1968 | qa: fs/snaps fails for fuse |
||
1969 | * https://tracker.ceph.com/issues/50222 |
||
1970 | osd: 5.2s0 deep-scrub : stat mismatch |
||
1971 | * https://tracker.ceph.com/issues/56698 |
||
1972 | client: FAILED ceph_assert(_size == 0) |
||
1973 | * https://tracker.ceph.com/issues/50223 |
||
1974 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
1975 | 66 | Rishabh Dave | |
1976 | 65 | Rishabh Dave | |
1977 | h3. 2022 Jul 15 |
||
1978 | |||
1979 | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/ |
||
1980 | |||
1981 | re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/ |
||
1982 | |||
1983 | * https://tracker.ceph.com/issues/53859 |
||
1984 | Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
1985 | * https://tracker.ceph.com/issues/55804 |
||
1986 | Command failed (workunit test suites/pjd.sh) |
||
1987 | * https://tracker.ceph.com/issues/50223 |
||
1988 | client.xxxx isn't responding to mclientcaps(revoke) |
||
1989 | * https://tracker.ceph.com/issues/50222 |
||
1990 | osd: deep-scrub : stat mismatch |
||
1991 | |||
1992 | * https://tracker.ceph.com/issues/56632 |
||
1993 | Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones) |
||
1994 | * https://tracker.ceph.com/issues/56634 |
||
1995 | workunit test fs/snaps/snaptest-intodir.sh |
||
1996 | * https://tracker.ceph.com/issues/56644 |
||
1997 | Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation) |
||
1998 | |||
1999 | 61 | Rishabh Dave | |
2000 | |||
2001 | h3. 2022 July 05 |
||
2002 | 62 | Rishabh Dave | |
2003 | 64 | Rishabh Dave | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/ |
2004 | |||
2005 | On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/ |
||
2006 | |||
2007 | On 2nd re-run only a few jobs failed: |
||
2008 | 62 | Rishabh Dave | http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/ |
||
2010 | |||
2011 | * https://tracker.ceph.com/issues/56446 |
||
2012 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
2013 | * https://tracker.ceph.com/issues/55804 |
||
2014 | Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/ |
||
2015 | |||
2016 | * https://tracker.ceph.com/issues/56445 |
||
2017 | 63 | Rishabh Dave | Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --" |
2018 | * https://tracker.ceph.com/issues/51267 |
||
2019 | Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 |
||
2020 | 62 | Rishabh Dave | * https://tracker.ceph.com/issues/50224 |
2021 | Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring) |
||
2022 | 61 | Rishabh Dave | |
2023 | 58 | Venky Shankar | |
2024 | |||
2025 | h3. 2022 July 04 |
||
2026 | |||
2027 | https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/ |
||
2028 | (RHEL runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel) |
||
2029 | |||
2030 | * https://tracker.ceph.com/issues/56445 |
||
2031 | 59 | Rishabh Dave | Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --" |
2032 | * https://tracker.ceph.com/issues/56446 |
||
2033 | Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits) |
||
2034 | * https://tracker.ceph.com/issues/51964 |
||
2035 | 60 | Rishabh Dave | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
2036 | 59 | Rishabh Dave | * https://tracker.ceph.com/issues/52624 |
2037 | 57 | Venky Shankar | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
2038 | |||
2039 | h3. 2022 June 20 |
||
2040 | |||
2041 | https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/ |
||
2042 | https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/ |
||
2043 | |||
2044 | * https://tracker.ceph.com/issues/52624 |
||
2045 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2046 | * https://tracker.ceph.com/issues/55804 |
||
2047 | qa failure: pjd link tests failed |
||
2048 | * https://tracker.ceph.com/issues/54108 |
||
2049 | qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" |
||
2050 | * https://tracker.ceph.com/issues/55332 |
||
2051 | 56 | Patrick Donnelly | Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) |
2052 | |||
2053 | h3. 2022 June 13 |
||
2054 | |||
2055 | https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/ |
||
2056 | |||
2057 | * https://tracker.ceph.com/issues/56024 |
||
2058 | cephadm: removes ceph.conf during qa run causing command failure |
||
2059 | * https://tracker.ceph.com/issues/48773 |
||
2060 | qa: scrub does not complete |
||
2061 | * https://tracker.ceph.com/issues/56012 |
||
2062 | mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay()) |
||
2063 | 55 | Venky Shankar | |
2064 | 54 | Venky Shankar | |
2065 | h3. 2022 Jun 13 |
||
2066 | |||
2067 | https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/ |
||
2068 | https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/ |
||
2069 | |||
2070 | * https://tracker.ceph.com/issues/52624 |
||
2071 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2072 | * https://tracker.ceph.com/issues/51964 |
||
2073 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
2074 | * https://tracker.ceph.com/issues/53859 |
||
2075 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
2076 | * https://tracker.ceph.com/issues/55804 |
||
2077 | qa failure: pjd link tests failed |
||
2078 | * https://tracker.ceph.com/issues/56003 |
||
2079 | client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0) |
||
2080 | * https://tracker.ceph.com/issues/56011 |
||
2081 | fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison |
||
2082 | * https://tracker.ceph.com/issues/56012 |
||
2083 | 53 | Venky Shankar | mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay()) |
2084 | |||
2085 | h3. 2022 Jun 07 |
||
2086 | |||
2087 | https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/ |
||
2088 | https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR) |
||
2089 | |||
2090 | * https://tracker.ceph.com/issues/52624 |
||
2091 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2092 | * https://tracker.ceph.com/issues/50223 |
||
2093 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2094 | * https://tracker.ceph.com/issues/50224 |
||
2095 | 51 | Venky Shankar | qa: test_mirroring_init_failure_with_recovery failure |
2096 | |||
2097 | h3. 2022 May 12 |
||
2098 | 52 | Venky Shankar | |
2099 | 51 | Venky Shankar | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847 |
2100 | https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun) |
||
2101 | |||
2102 | * https://tracker.ceph.com/issues/52624 |
||
2103 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2104 | * https://tracker.ceph.com/issues/50223 |
||
2105 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2106 | * https://tracker.ceph.com/issues/55332 |
||
2107 | Failure in snaptest-git-ceph.sh |
||
2108 | * https://tracker.ceph.com/issues/53859 |
||
2109 | 1 | Patrick Donnelly | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
2110 | 52 | Venky Shankar | * https://tracker.ceph.com/issues/55538 |
2111 | Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) |
||
2112 | 51 | Venky Shankar | * https://tracker.ceph.com/issues/55258 |
2113 | 49 | Venky Shankar | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently) |
2114 | |||
2115 | 50 | Venky Shankar | h3. 2022 May 04 |
2116 | |||
2117 | https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/ |
||
2118 | 49 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs) |
2119 | |||
2120 | * https://tracker.ceph.com/issues/52624 |
||
2121 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2122 | * https://tracker.ceph.com/issues/50223 |
||
2123 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2124 | * https://tracker.ceph.com/issues/55332 |
||
2125 | Failure in snaptest-git-ceph.sh |
||
2126 | * https://tracker.ceph.com/issues/53859 |
||
2127 | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
||
2128 | * https://tracker.ceph.com/issues/55516 |
||
2129 | qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)" |
||
2130 | * https://tracker.ceph.com/issues/55537 |
||
2131 | mds: crash during fs:upgrade test |
||
2132 | * https://tracker.ceph.com/issues/55538 |
||
2133 | 48 | Venky Shankar | Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead) |
2134 | |||
2135 | h3. 2022 Apr 25 |
||
2136 | |||
2137 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar) |
||
2138 | |||
2139 | * https://tracker.ceph.com/issues/52624 |
||
2140 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2141 | * https://tracker.ceph.com/issues/50223 |
||
2142 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2143 | * https://tracker.ceph.com/issues/55258 |
||
2144 | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs |
||
2145 | * https://tracker.ceph.com/issues/55377 |
||
2146 | 47 | Venky Shankar | kclient: mds revoke of Fwb caps stuck after the kclient tries writeback once |
2147 | |||
2148 | h3. 2022 Apr 14 |
||
2149 | |||
2150 | https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044 |
||
2151 | |||
2152 | * https://tracker.ceph.com/issues/52624 |
||
2153 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2154 | * https://tracker.ceph.com/issues/50223 |
||
2155 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2156 | * https://tracker.ceph.com/issues/52438 |
||
2157 | qa: ffsb timeout |
||
2158 | * https://tracker.ceph.com/issues/55170 |
||
2159 | mds: crash during rejoin (CDir::fetch_keys) |
||
2160 | * https://tracker.ceph.com/issues/55331 |
||
2161 | pjd failure |
||
2162 | * https://tracker.ceph.com/issues/48773 |
||
2163 | qa: scrub does not complete |
||
2164 | * https://tracker.ceph.com/issues/55332 |
||
2165 | Failure in snaptest-git-ceph.sh |
||
2166 | * https://tracker.ceph.com/issues/55258 |
||
2167 | 45 | Venky Shankar | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs |
2168 | |||
2169 | 46 | Venky Shankar | h3. 2022 Apr 11 |
2170 | 45 | Venky Shankar | |
2171 | https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242 |
||
2172 | |||
2173 | * https://tracker.ceph.com/issues/48773 |
||
2174 | qa: scrub does not complete |
||
2175 | * https://tracker.ceph.com/issues/52624 |
||
2176 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2177 | * https://tracker.ceph.com/issues/52438 |
||
2178 | qa: ffsb timeout |
||
2179 | * https://tracker.ceph.com/issues/48680 |
||
2180 | mds: scrubbing stuck "scrub active (0 inodes in the stack)" |
||
2181 | * https://tracker.ceph.com/issues/55236 |
||
2182 | qa: fs/snaps tests fails with "hit max job timeout" |
||
2183 | * https://tracker.ceph.com/issues/54108 |
||
2184 | qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" |
||
2185 | * https://tracker.ceph.com/issues/54971 |
||
2186 | Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics) |
||
2187 | * https://tracker.ceph.com/issues/50223 |
||
2188 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2189 | * https://tracker.ceph.com/issues/55258 |
||
2190 | 44 | Venky Shankar | lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs |
2191 | 42 | Venky Shankar | |
2192 | 43 | Venky Shankar | h3. 2022 Mar 21 |
2193 | |||
2194 | https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/ |
||
2195 | |||
2196 | Run didn't go well; lots of failures. Debugging by dropping PRs and rerunning against the master branch, merging only unrelated PRs that pass tests. |
||
2197 | |||
2198 | |||
2199 | 42 | Venky Shankar | h3. 2022 Mar 08 |
2200 | |||
2201 | https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/ |
||
2202 | |||
2203 | rerun with |
||
2204 | - (drop) https://github.com/ceph/ceph/pull/44679 |
||
2205 | - (drop) https://github.com/ceph/ceph/pull/44958 |
||
2206 | https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/ |
||
2207 | |||
2208 | * https://tracker.ceph.com/issues/54419 (new) |
||
2209 | `ceph orch upgrade start` seems to never reach completion |
||
2210 | * https://tracker.ceph.com/issues/51964 |
||
2211 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
2212 | * https://tracker.ceph.com/issues/52624 |
||
2213 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2214 | * https://tracker.ceph.com/issues/50223 |
||
2215 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2216 | * https://tracker.ceph.com/issues/52438 |
||
2217 | qa: ffsb timeout |
||
2218 | * https://tracker.ceph.com/issues/50821 |
||
2219 | qa: untar_snap_rm failure during mds thrashing |
||
2220 | 41 | Venky Shankar | |
2221 | |||
2222 | h3. 2022 Feb 09 |
||
2223 | |||
2224 | https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/ |
||
2225 | |||
2226 | rerun with |
||
2227 | - (drop) https://github.com/ceph/ceph/pull/37938 |
||
2228 | - (drop) https://github.com/ceph/ceph/pull/44335 |
||
2229 | - (drop) https://github.com/ceph/ceph/pull/44491 |
||
2230 | - (drop) https://github.com/ceph/ceph/pull/44501 |
||
2231 | https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/ |
||
2232 | |||
2233 | * https://tracker.ceph.com/issues/51964 |
||
2234 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
2235 | * https://tracker.ceph.com/issues/54066 |
||
2236 | test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0` |
||
2237 | * https://tracker.ceph.com/issues/48773 |
||
2238 | qa: scrub does not complete |
||
2239 | * https://tracker.ceph.com/issues/52624 |
||
2240 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2241 | * https://tracker.ceph.com/issues/50223 |
||
2242 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2243 | * https://tracker.ceph.com/issues/52438 |
||
2244 | 40 | Patrick Donnelly | qa: ffsb timeout |
2245 | |||
2246 | h3. 2022 Feb 01 |
||
2247 | |||
2248 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526 |
||
2249 | |||
2250 | * https://tracker.ceph.com/issues/54107 |
||
2251 | kclient: hang during umount |
||
2252 | * https://tracker.ceph.com/issues/54106 |
||
2253 | kclient: hang during workunit cleanup |
||
2254 | * https://tracker.ceph.com/issues/54108 |
||
2255 | qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}" |
||
2256 | * https://tracker.ceph.com/issues/48773 |
||
2257 | qa: scrub does not complete |
||
2258 | * https://tracker.ceph.com/issues/52438 |
||
2259 | qa: ffsb timeout |
||
2260 | 36 | Venky Shankar | |
2261 | |||
2262 | h3. 2022 Jan 13 |
||
2263 | 39 | Venky Shankar | |
2264 | 36 | Venky Shankar | https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/ |
2265 | 38 | Venky Shankar | |
2266 | rerun with: |
||
2267 | 36 | Venky Shankar | - (add) https://github.com/ceph/ceph/pull/44570 |
2268 | - (drop) https://github.com/ceph/ceph/pull/43184 |
||
2269 | https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/ |
||
2270 | |||
2271 | * https://tracker.ceph.com/issues/50223 |
||
2272 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2273 | * https://tracker.ceph.com/issues/51282 |
||
2274 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2275 | * https://tracker.ceph.com/issues/48773 |
||
2276 | qa: scrub does not complete |
||
2277 | * https://tracker.ceph.com/issues/52624 |
||
2278 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2279 | * https://tracker.ceph.com/issues/53859 |
||
2280 | 34 | Venky Shankar | qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm) |
2281 | |||
2282 | h3. 2022 Jan 03 |
||
2283 | |||
2284 | https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/ |
||
2285 | https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun) |
||
2286 | |||
2287 | * https://tracker.ceph.com/issues/50223 |
||
2288 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2289 | * https://tracker.ceph.com/issues/51964 |
||
2290 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
2291 | * https://tracker.ceph.com/issues/51267 |
||
2292 | CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... |
||
2293 | * https://tracker.ceph.com/issues/51282 |
||
2294 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2295 | * https://tracker.ceph.com/issues/50821 |
||
2296 | qa: untar_snap_rm failure during mds thrashing |
||
2297 | 35 | Ramana Raja | * https://tracker.ceph.com/issues/51278 |
2298 | mds: "FAILED ceph_assert(!segments.empty())" |
||
2299 | * https://tracker.ceph.com/issues/52279 |
||
2300 | 34 | Venky Shankar | cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter |
2301 | 33 | Patrick Donnelly | |
2302 | |||
2303 | h3. 2021 Dec 22 |
||
2304 | |||
2305 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316 |
||
2306 | |||
2307 | * https://tracker.ceph.com/issues/52624 |
||
2308 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2309 | * https://tracker.ceph.com/issues/50223 |
||
2310 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2311 | * https://tracker.ceph.com/issues/52279 |
||
2312 | cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter |
||
2313 | * https://tracker.ceph.com/issues/50224 |
||
2314 | qa: test_mirroring_init_failure_with_recovery failure |
||
2315 | * https://tracker.ceph.com/issues/48773 |
||
2316 | qa: scrub does not complete |
||
2317 | 32 | Venky Shankar | |
2318 | |||
2319 | h3. 2021 Nov 30 |
||
2320 | |||
2321 | https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/ |
||
2322 | https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes) |
||
2323 | |||
2324 | * https://tracker.ceph.com/issues/53436 |
||
2325 | mds, mon: mds beacon messages get dropped? (mds never reaches up:active state) |
||
2326 | * https://tracker.ceph.com/issues/51964 |
||
2327 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
2328 | * https://tracker.ceph.com/issues/48812 |
||
2329 | qa: test_scrub_pause_and_resume_with_abort failure |
||
2330 | * https://tracker.ceph.com/issues/51076 |
||
2331 | "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. |
||
2332 | * https://tracker.ceph.com/issues/50223 |
||
2333 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2334 | * https://tracker.ceph.com/issues/52624 |
||
2335 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2336 | * https://tracker.ceph.com/issues/50250 |
||
2337 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2338 | 31 | Patrick Donnelly | |
2339 | |||
2340 | h3. 2021 November 9 |
||
2341 | |||
2342 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315 |
||
2343 | |||
2344 | * https://tracker.ceph.com/issues/53214 |
||
2345 | qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory" |
||
2346 | * https://tracker.ceph.com/issues/48773 |
||
2347 | qa: scrub does not complete |
||
2348 | * https://tracker.ceph.com/issues/50223 |
||
2349 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2350 | * https://tracker.ceph.com/issues/51282 |
||
2351 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2352 | * https://tracker.ceph.com/issues/52624 |
||
2353 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2354 | * https://tracker.ceph.com/issues/53216 |
||
2355 | qa: "RuntimeError: value of attributes should be either str or None. client_id" |
||
2356 | * https://tracker.ceph.com/issues/50250 |
||
2357 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2358 | |||
2359 | 30 | Patrick Donnelly | |
2360 | |||
2361 | h3. 2021 November 03 |
||
2362 | |||
2363 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355 |
||
2364 | |||
2365 | * https://tracker.ceph.com/issues/51964 |
||
2366 | qa: test_cephfs_mirror_restart_sync_on_blocklist failure |
||
2367 | * https://tracker.ceph.com/issues/51282 |
||
2368 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2369 | * https://tracker.ceph.com/issues/52436 |
||
2370 | fs/ceph: "corrupt mdsmap" |
||
2371 | * https://tracker.ceph.com/issues/53074 |
||
2372 | pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active |
||
2373 | * https://tracker.ceph.com/issues/53150 |
||
2374 | pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5 |
||
2375 | * https://tracker.ceph.com/issues/53155 |
||
2376 | MDSMonitor: assertion during upgrade to v16.2.5+ |
||
2377 | 29 | Patrick Donnelly | |
2378 | |||
2379 | h3. 2021 October 26 |
||
2380 | |||
2381 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447 |
||
2382 | |||
2383 | * https://tracker.ceph.com/issues/53074 |
||
2384 | pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active |
||
2385 | * https://tracker.ceph.com/issues/52997 |
||
2386 | testing: hanging umount |
||
2387 | * https://tracker.ceph.com/issues/50824 |
||
2388 | qa: snaptest-git-ceph bus error |
||
2389 | * https://tracker.ceph.com/issues/52436 |
||
2390 | fs/ceph: "corrupt mdsmap" |
||
2391 | * https://tracker.ceph.com/issues/48773 |
||
2392 | qa: scrub does not complete |
||
2393 | * https://tracker.ceph.com/issues/53082 |
||
2394 | ceph-fuse: segmentation fault in Client::handle_mds_map |
||
2395 | * https://tracker.ceph.com/issues/50223 |
||
2396 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2397 | * https://tracker.ceph.com/issues/52624 |
||
2398 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2399 | * https://tracker.ceph.com/issues/50224 |
||
2400 | qa: test_mirroring_init_failure_with_recovery failure |
||
2401 | * https://tracker.ceph.com/issues/50821 |
||
2402 | qa: untar_snap_rm failure during mds thrashing |
||
2403 | * https://tracker.ceph.com/issues/50250 |
||
2404 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2405 | |||
2406 | 27 | Patrick Donnelly | |
2407 | |||
2408 | 28 | Patrick Donnelly | h3. 2021 October 19 |
2409 | 27 | Patrick Donnelly | |
2410 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028 |
||
2411 | |||
2412 | * https://tracker.ceph.com/issues/52995 |
||
2413 | qa: test_standby_count_wanted failure |
||
2414 | * https://tracker.ceph.com/issues/52948 |
||
2415 | osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up" |
||
2416 | * https://tracker.ceph.com/issues/52996 |
||
2417 | qa: test_perf_counters via test_openfiletable |
||
2418 | * https://tracker.ceph.com/issues/48772 |
||
2419 | qa: pjd: not ok 9, 44, 80 |
||
2420 | * https://tracker.ceph.com/issues/52997 |
||
2421 | testing: hanging umount |
||
2422 | * https://tracker.ceph.com/issues/50250 |
||
2423 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2424 | * https://tracker.ceph.com/issues/52624 |
||
2425 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2426 | * https://tracker.ceph.com/issues/50223 |
||
2427 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2428 | * https://tracker.ceph.com/issues/50821 |
||
2429 | qa: untar_snap_rm failure during mds thrashing |
||
2430 | * https://tracker.ceph.com/issues/48773 |
||
2431 | qa: scrub does not complete |
||
2432 | 26 | Patrick Donnelly | |
2433 | |||
2434 | h3. 2021 October 12 |
||
2435 | |||
2436 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211 |
||
2437 | |||
2438 | Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944 |
||
2439 | |||
2440 | New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167 |
||
2441 | |||
2442 | |||
2443 | * https://tracker.ceph.com/issues/51282 |
||
2444 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2445 | * https://tracker.ceph.com/issues/52948 |
||
2446 | osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up" |
||
2447 | * https://tracker.ceph.com/issues/48773 |
||
2448 | qa: scrub does not complete |
||
2449 | * https://tracker.ceph.com/issues/50224 |
||
2450 | qa: test_mirroring_init_failure_with_recovery failure |
||
2451 | * https://tracker.ceph.com/issues/52949 |
||
2452 | RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'} |
||
2453 | 25 | Patrick Donnelly | |
2454 | 23 | Patrick Donnelly | |
2455 | 24 | Patrick Donnelly | h3. 2021 October 02 |
2456 | |||
2457 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337 |
||
2458 | |||
2459 | Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit. |
||
2460 | |||
2461 | test_simple failures caused by PR in this set. |
||
2462 | |||
2463 | A few reruns because of QA infra noise. |
||
2464 | |||
2465 | * https://tracker.ceph.com/issues/52822 |
||
2466 | qa: failed pacific install on fs:upgrade |
||
2467 | * https://tracker.ceph.com/issues/52624 |
||
2468 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2469 | * https://tracker.ceph.com/issues/50223 |
||
2470 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2471 | * https://tracker.ceph.com/issues/48773 |
||
2472 | qa: scrub does not complete |
||
2473 | |||
2474 | |||
2475 | 23 | Patrick Donnelly | h3. 2021 September 20 |
2476 | |||
2477 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826 |
||
2478 | |||
2479 | * https://tracker.ceph.com/issues/52677 |
||
2480 | qa: test_simple failure |
||
2481 | * https://tracker.ceph.com/issues/51279 |
||
2482 | kclient hangs on umount (testing branch) |
||
2483 | * https://tracker.ceph.com/issues/50223 |
||
2484 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2485 | * https://tracker.ceph.com/issues/50250 |
||
2486 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2487 | * https://tracker.ceph.com/issues/52624 |
||
2488 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2489 | * https://tracker.ceph.com/issues/52438 |
||
2490 | qa: ffsb timeout |
||
2491 | 22 | Patrick Donnelly | |
2492 | |||
2493 | h3. 2021 September 10 |
||
2494 | |||
2495 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451 |
||
2496 | |||
2497 | * https://tracker.ceph.com/issues/50223 |
||
2498 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2499 | * https://tracker.ceph.com/issues/50250 |
||
2500 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2501 | * https://tracker.ceph.com/issues/52624 |
||
2502 | qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" |
||
2503 | * https://tracker.ceph.com/issues/52625 |
||
2504 | qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots) |
||
2505 | * https://tracker.ceph.com/issues/52439 |
||
2506 | qa: acls does not compile on centos stream |
||
2507 | * https://tracker.ceph.com/issues/50821 |
||
2508 | qa: untar_snap_rm failure during mds thrashing |
||
2509 | * https://tracker.ceph.com/issues/48773 |
||
2510 | qa: scrub does not complete |
||
2511 | * https://tracker.ceph.com/issues/52626 |
||
2512 | mds: ScrubStack.cc: 831: FAILED ceph_assert(diri) |
||
2513 | * https://tracker.ceph.com/issues/51279 |
||
2514 | kclient hangs on umount (testing branch) |
||
2515 | 21 | Patrick Donnelly | |
2516 | |||
2517 | h3. 2021 August 27 |
||
2518 | |||
2519 | Several jobs died because of device failures. |
||
2520 | |||
2521 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746 |
||
2522 | |||
2523 | * https://tracker.ceph.com/issues/52430 |
||
2524 | mds: fast async create client mount breaks racy test |
||
2525 | * https://tracker.ceph.com/issues/52436 |
||
2526 | fs/ceph: "corrupt mdsmap" |
||
2527 | * https://tracker.ceph.com/issues/52437 |
||
2528 | mds: InoTable::replay_release_ids abort via test_inotable_sync |
||
2529 | * https://tracker.ceph.com/issues/51282 |
||
2530 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2531 | * https://tracker.ceph.com/issues/52438 |
||
2532 | qa: ffsb timeout |
||
2533 | * https://tracker.ceph.com/issues/52439 |
||
2534 | qa: acls does not compile on centos stream |
||
2535 | 20 | Patrick Donnelly | |
2536 | |||
2537 | h3. 2021 July 30 |
||
2538 | |||
2539 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022 |
||
2540 | |||
2541 | * https://tracker.ceph.com/issues/50250 |
||
2542 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2543 | * https://tracker.ceph.com/issues/51282 |
||
2544 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2545 | * https://tracker.ceph.com/issues/48773 |
||
2546 | qa: scrub does not complete |
||
2547 | * https://tracker.ceph.com/issues/51975 |
||
2548 | pybind/mgr/stats: KeyError |
||
2549 | 19 | Patrick Donnelly | |
2550 | |||
2551 | h3. 2021 July 28 |
||
2552 | |||
2553 | https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/ |
||
2554 | |||
2555 | with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/ |
||
2556 | |||
2557 | * https://tracker.ceph.com/issues/51905 |
||
2558 | qa: "error reading sessionmap 'mds1_sessionmap'" |
||
2559 | * https://tracker.ceph.com/issues/48773 |
||
2560 | qa: scrub does not complete |
||
2561 | * https://tracker.ceph.com/issues/50250 |
||
2562 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2563 | * https://tracker.ceph.com/issues/51267 |
||
2564 | CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:... |
||
2565 | * https://tracker.ceph.com/issues/51279 |
||
2566 | kclient hangs on umount (testing branch) |
||
2567 | 18 | Patrick Donnelly | |
2568 | |||
2569 | h3. 2021 July 16 |
||
2570 | |||
2571 | https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/ |
||
2572 | |||
2573 | * https://tracker.ceph.com/issues/48773 |
||
2574 | qa: scrub does not complete |
||
2575 | * https://tracker.ceph.com/issues/48772 |
||
2576 | qa: pjd: not ok 9, 44, 80 |
||
2577 | * https://tracker.ceph.com/issues/45434 |
||
2578 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2579 | * https://tracker.ceph.com/issues/51279 |
||
2580 | kclient hangs on umount (testing branch) |
||
2581 | * https://tracker.ceph.com/issues/50824 |
||
2582 | qa: snaptest-git-ceph bus error |
||
2583 | 17 | Patrick Donnelly | |
2584 | |||
2585 | h3. 2021 July 04 |
||
2586 | |||
2587 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904 |
||
2588 | |||
2589 | * https://tracker.ceph.com/issues/48773 |
||
2590 | qa: scrub does not complete |
||
2591 | * https://tracker.ceph.com/issues/39150 |
||
2592 | mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum |
||
2593 | * https://tracker.ceph.com/issues/45434 |
||
2594 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2595 | * https://tracker.ceph.com/issues/51282 |
||
2596 | pybind/mgr/mgr_util: .mgr pool may be created too early, causing spurious PG_DEGRADED warnings |
||
2597 | * https://tracker.ceph.com/issues/48771 |
||
2598 | qa: iogen: workload fails to cause balancing |
||
2599 | * https://tracker.ceph.com/issues/51279 |
||
2600 | kclient hangs on umount (testing branch) |
||
2601 | * https://tracker.ceph.com/issues/50250 |
||
2602 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones") |
||
2603 | 16 | Patrick Donnelly | |
2604 | |||
2605 | h3. 2021 July 01 |
||
2606 | |||
2607 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056 |
||
2608 | |||
2609 | * https://tracker.ceph.com/issues/51197 |
||
2610 | qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details |
||
2611 | * https://tracker.ceph.com/issues/50866 |
||
2612 | osd: stat mismatch on objects |
||
2613 | * https://tracker.ceph.com/issues/48773 |
||
2614 | qa: scrub does not complete |
||
2615 | 15 | Patrick Donnelly | |
2616 | |||
2617 | h3. 2021 June 26 |
||
2618 | |||
2619 | https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/ |
||
2620 | |||
2621 | * https://tracker.ceph.com/issues/51183 |
||
2622 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
2623 | * https://tracker.ceph.com/issues/51410 |
||
2624 | kclient: fails to finish reconnect during MDS thrashing (testing branch) |
||
2625 | * https://tracker.ceph.com/issues/48773 |
||
2626 | qa: scrub does not complete |
||
2627 | * https://tracker.ceph.com/issues/51282 |
||
2628 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
2629 | * https://tracker.ceph.com/issues/51169 |
||
2630 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
2631 | * https://tracker.ceph.com/issues/48772 |
||
2632 | qa: pjd: not ok 9, 44, 80 |
||
2633 | 14 | Patrick Donnelly | |
2634 | |||
2635 | h3. 2021 June 21 |
||
2636 | |||
2637 | https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/ |
||
2638 | |||
2639 | One failure was caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599 |
||
2640 | |||
2641 | * https://tracker.ceph.com/issues/51282 |
||
2642 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
2643 | * https://tracker.ceph.com/issues/51183 |
||
2644 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
2645 | * https://tracker.ceph.com/issues/48773 |
||
2646 | qa: scrub does not complete |
||
2647 | * https://tracker.ceph.com/issues/48771 |
||
2648 | qa: iogen: workload fails to cause balancing |
||
2649 | * https://tracker.ceph.com/issues/51169 |
||
2650 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
2651 | * https://tracker.ceph.com/issues/50495 |
||
2652 | libcephfs: shutdown race fails with status 141 |
||
2653 | * https://tracker.ceph.com/issues/45434 |
||
2654 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2655 | * https://tracker.ceph.com/issues/50824 |
||
2656 | qa: snaptest-git-ceph bus error |
||
2657 | * https://tracker.ceph.com/issues/50223 |
||
2658 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2659 | 13 | Patrick Donnelly | |
2660 | |||
2661 | h3. 2021 June 16 |
||
2662 | |||
2663 | https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/ |
||
2664 | |||
2665 | A class of MDS abort failures was caused by PR: https://github.com/ceph/ceph/pull/41667 |
||
2666 | |||
2667 | * https://tracker.ceph.com/issues/45434 |
||
2668 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2669 | * https://tracker.ceph.com/issues/51169 |
||
2670 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
2671 | * https://tracker.ceph.com/issues/43216 |
||
2672 | MDSMonitor: removes MDS coming out of quorum election |
||
2673 | * https://tracker.ceph.com/issues/51278 |
||
2674 | mds: "FAILED ceph_assert(!segments.empty())" |
||
2675 | * https://tracker.ceph.com/issues/51279 |
||
2676 | kclient hangs on umount (testing branch) |
||
2677 | * https://tracker.ceph.com/issues/51280 |
||
2678 | mds: "FAILED ceph_assert(r == 0 || r == -2)" |
||
2679 | * https://tracker.ceph.com/issues/51183 |
||
2680 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
2681 | * https://tracker.ceph.com/issues/51281 |
||
2682 | qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'" |
||
2683 | * https://tracker.ceph.com/issues/48773 |
||
2684 | qa: scrub does not complete |
||
2685 | * https://tracker.ceph.com/issues/51076 |
||
2686 | "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend. |
||
2687 | * https://tracker.ceph.com/issues/51228 |
||
2688 | qa: rmdir: failed to remove 'a/.snap/*': No such file or directory |
||
2689 | * https://tracker.ceph.com/issues/51282 |
||
2690 | pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings |
||
2691 | 12 | Patrick Donnelly | |
2692 | |||
2693 | h3. 2021 June 14 |
||
2694 | |||
2695 | https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/ |
||
2696 | |||
2697 | Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific. |
||
2698 | |||
2699 | * https://tracker.ceph.com/issues/51169 |
||
2700 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
2701 | * https://tracker.ceph.com/issues/51228 |
||
2702 | qa: rmdir: failed to remove 'a/.snap/*': No such file or directory |
||
2703 | * https://tracker.ceph.com/issues/48773 |
||
2704 | qa: scrub does not complete |
||
2705 | * https://tracker.ceph.com/issues/51183 |
||
2706 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
2707 | * https://tracker.ceph.com/issues/45434 |
||
2708 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2709 | * https://tracker.ceph.com/issues/51182 |
||
2710 | pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs' |
||
2711 | * https://tracker.ceph.com/issues/51229 |
||
2712 | qa: test_multi_snap_schedule list difference failure |
||
2713 | * https://tracker.ceph.com/issues/50821 |
||
2714 | qa: untar_snap_rm failure during mds thrashing |
||
2715 | 11 | Patrick Donnelly | |
2716 | |||
2717 | h3. 2021 June 13 |
||
2718 | |||
2719 | https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/ |
||
2720 | |||
2721 | Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific. |
||
2722 | |||
2723 | * https://tracker.ceph.com/issues/51169 |
||
2724 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
2725 | * https://tracker.ceph.com/issues/48773 |
||
2726 | qa: scrub does not complete |
||
2727 | * https://tracker.ceph.com/issues/51182 |
||
2728 | pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs' |
||
2729 | * https://tracker.ceph.com/issues/51183 |
||
2730 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
2731 | * https://tracker.ceph.com/issues/51197 |
||
2732 | qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details |
||
2733 | * https://tracker.ceph.com/issues/45434 |
||
2734 | 10 | Patrick Donnelly | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
2735 | |||
2736 | h3. 2021 June 11 |
||
2737 | |||
2738 | https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/ |
||
2739 | |||
2740 | Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific. |
||
2741 | |||
2742 | * https://tracker.ceph.com/issues/51169 |
||
2743 | qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp |
||
2744 | * https://tracker.ceph.com/issues/45434 |
||
2745 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2746 | * https://tracker.ceph.com/issues/48771 |
||
2747 | qa: iogen: workload fails to cause balancing |
||
2748 | * https://tracker.ceph.com/issues/43216 |
||
2749 | MDSMonitor: removes MDS coming out of quorum election |
||
2750 | * https://tracker.ceph.com/issues/51182 |
||
2751 | pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs' |
||
2752 | * https://tracker.ceph.com/issues/50223 |
||
2753 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2754 | * https://tracker.ceph.com/issues/48773 |
||
2755 | qa: scrub does not complete |
||
2756 | * https://tracker.ceph.com/issues/51183 |
||
2757 | qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions' |
||
2758 | * https://tracker.ceph.com/issues/51184 |
||
2759 | qa: fs:bugs does not specify distro |
||
2760 | 9 | Patrick Donnelly | |
2761 | |||
2762 | h3. 2021 June 03 |
||
2763 | |||
2764 | https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/ |
||
2765 | |||
2766 | * https://tracker.ceph.com/issues/45434 |
||
2767 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2768 | * https://tracker.ceph.com/issues/50016 |
||
2769 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2770 | * https://tracker.ceph.com/issues/50821 |
||
2771 | qa: untar_snap_rm failure during mds thrashing |
||
2772 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2773 | msg: active_connections regression |
||
2774 | * https://tracker.ceph.com/issues/49845#note-2 (regression) |
||
2775 | qa: failed umount in test_volumes |
||
2776 | * https://tracker.ceph.com/issues/48773 |
||
2777 | qa: scrub does not complete |
||
2778 | * https://tracker.ceph.com/issues/43216 |
||
2779 | MDSMonitor: removes MDS coming out of quorum election |
||
2780 | 7 | Patrick Donnelly | |
2781 | |||
2782 | 8 | Patrick Donnelly | h3. 2021 May 18 |
2783 | |||
2784 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114 |
||
2785 | |||
2786 | A regression in the testing kernel caused some failures. Ilya fixed those and the |
||
2787 | rerun looked better. Some odd new noise appeared in the rerun relating to packaging and "No |
||
2788 | module named 'tasks.ceph'". |
||
2789 | |||
2790 | * https://tracker.ceph.com/issues/50824 |
||
2791 | qa: snaptest-git-ceph bus error |
||
2792 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2793 | msg: active_connections regression |
||
2794 | * https://tracker.ceph.com/issues/49845#note-2 (regression) |
||
2795 | qa: failed umount in test_volumes |
||
2796 | * https://tracker.ceph.com/issues/48203 (stock kernel update required) |
||
2797 | qa: quota failure |
||
2798 | |||
2799 | |||
2800 | 7 | Patrick Donnelly | h3. 2021 May 18 |
2801 | |||
2802 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642 |
||
2803 | |||
2804 | * https://tracker.ceph.com/issues/50821 |
||
2805 | qa: untar_snap_rm failure during mds thrashing |
||
2806 | * https://tracker.ceph.com/issues/48773 |
||
2807 | qa: scrub does not complete |
||
2808 | * https://tracker.ceph.com/issues/45591 |
||
2809 | mgr: FAILED ceph_assert(daemon != nullptr) |
||
2810 | * https://tracker.ceph.com/issues/50866 |
||
2811 | osd: stat mismatch on objects |
||
2812 | * https://tracker.ceph.com/issues/50016 |
||
2813 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2814 | * https://tracker.ceph.com/issues/50867 |
||
2815 | qa: fs:mirror: reduced data availability |
||
2818 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2819 | msg: active_connections regression |
||
2820 | * https://tracker.ceph.com/issues/50223 |
||
2821 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2822 | * https://tracker.ceph.com/issues/50868 |
||
2823 | qa: "kern.log.gz already exists; not overwritten" |
||
2824 | * https://tracker.ceph.com/issues/50870 |
||
2825 | qa: test_full: "rm: cannot remove 'large_file_a': Permission denied" |
||
2826 | 6 | Patrick Donnelly | |
2827 | |||
2828 | h3. 2021 May 11 |
||
2829 | |||
2830 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042 |
||
2831 | |||
2832 | * One class of failures was caused by a PR |
||
2833 | * https://tracker.ceph.com/issues/48812 |
||
2834 | qa: test_scrub_pause_and_resume_with_abort failure |
||
2835 | * https://tracker.ceph.com/issues/50390 |
||
2836 | mds: monclient: wait_auth_rotating timed out after 30 |
||
2837 | * https://tracker.ceph.com/issues/48773 |
||
2838 | qa: scrub does not complete |
||
2839 | * https://tracker.ceph.com/issues/50821 |
||
2840 | qa: untar_snap_rm failure during mds thrashing |
||
2841 | * https://tracker.ceph.com/issues/50224 |
||
2842 | qa: test_mirroring_init_failure_with_recovery failure |
||
2843 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2844 | msg: active_connections regression |
||
2845 | * https://tracker.ceph.com/issues/50825 |
||
2846 | qa: snaptest-git-ceph hang during mon thrashing v2 |
||
2849 | * https://tracker.ceph.com/issues/50823 |
||
2850 | qa: RuntimeError: timeout waiting for cluster to stabilize |
||
2851 | 5 | Patrick Donnelly | |
2852 | |||
2853 | h3. 2021 May 14 |
||
2854 | |||
2855 | https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/ |
||
2856 | |||
2857 | * https://tracker.ceph.com/issues/48812 |
||
2858 | qa: test_scrub_pause_and_resume_with_abort failure |
||
2859 | * https://tracker.ceph.com/issues/50821 |
||
2860 | qa: untar_snap_rm failure during mds thrashing |
||
2861 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2862 | msg: active_connections regression |
||
2863 | * https://tracker.ceph.com/issues/50822 |
||
2864 | qa: testing kernel patch for client metrics causes mds abort |
||
2865 | * https://tracker.ceph.com/issues/48773 |
||
2866 | qa: scrub does not complete |
||
2867 | * https://tracker.ceph.com/issues/50823 |
||
2868 | qa: RuntimeError: timeout waiting for cluster to stabilize |
||
2869 | * https://tracker.ceph.com/issues/50824 |
||
2870 | qa: snaptest-git-ceph bus error |
||
2871 | * https://tracker.ceph.com/issues/50825 |
||
2872 | qa: snaptest-git-ceph hang during mon thrashing v2 |
||
2873 | * https://tracker.ceph.com/issues/50826 |
||
2874 | kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers |
||
2875 | 4 | Patrick Donnelly | |
2876 | |||
2877 | h3. 2021 May 01 |
||
2878 | |||
2879 | https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/ |
||
2880 | |||
2881 | * https://tracker.ceph.com/issues/45434 |
||
2882 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2883 | * https://tracker.ceph.com/issues/50281 |
||
2884 | qa: untar_snap_rm timeout |
||
2885 | * https://tracker.ceph.com/issues/48203 (stock kernel update required) |
||
2886 | qa: quota failure |
||
2887 | * https://tracker.ceph.com/issues/48773 |
||
2888 | qa: scrub does not complete |
||
2889 | * https://tracker.ceph.com/issues/50390 |
||
2890 | mds: monclient: wait_auth_rotating timed out after 30 |
||
2891 | * https://tracker.ceph.com/issues/50250 |
||
2892 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" |
||
2893 | * https://tracker.ceph.com/issues/50622 (regression) |
||
2894 | msg: active_connections regression |
||
2895 | * https://tracker.ceph.com/issues/45591 |
||
2896 | mgr: FAILED ceph_assert(daemon != nullptr) |
||
2897 | * https://tracker.ceph.com/issues/50221 |
||
2898 | qa: snaptest-git-ceph failure in git diff |
||
2899 | * https://tracker.ceph.com/issues/50016 |
||
2900 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2901 | 3 | Patrick Donnelly | |
2902 | |||
2903 | h3. 2021 Apr 15 |
||
2904 | |||
2905 | https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/ |
||
2906 | |||
2907 | * https://tracker.ceph.com/issues/50281 |
||
2908 | qa: untar_snap_rm timeout |
||
2909 | * https://tracker.ceph.com/issues/50220 |
||
2910 | qa: dbench workload timeout |
||
2911 | * https://tracker.ceph.com/issues/50246 |
||
2912 | mds: failure replaying journal (EMetaBlob) |
||
2913 | * https://tracker.ceph.com/issues/50250 |
||
2914 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" |
||
2915 | * https://tracker.ceph.com/issues/50016 |
||
2916 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2917 | * https://tracker.ceph.com/issues/50222 |
||
2918 | osd: 5.2s0 deep-scrub : stat mismatch |
||
2919 | * https://tracker.ceph.com/issues/45434 |
||
2920 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2921 | * https://tracker.ceph.com/issues/49845 |
||
2922 | qa: failed umount in test_volumes |
||
2923 | * https://tracker.ceph.com/issues/37808 |
||
2924 | osd: osdmap cache weak_refs assert during shutdown |
||
2925 | * https://tracker.ceph.com/issues/50387 |
||
2926 | client: fs/snaps failure |
||
2927 | * https://tracker.ceph.com/issues/50389 |
||
2928 | mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log |
||
2929 | * https://tracker.ceph.com/issues/50216 |
||
2930 | qa: "ls: cannot access 'lost+found': No such file or directory" |
||
2931 | * https://tracker.ceph.com/issues/50390 |
||
2932 | mds: monclient: wait_auth_rotating timed out after 30 |
||
2933 | |||
2934 | 1 | Patrick Donnelly | |
2935 | |||
2936 | 2 | Patrick Donnelly | h3. 2021 Apr 08 |
2937 | |||
2938 | https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/ |
||
2939 | |||
2940 | * https://tracker.ceph.com/issues/45434 |
||
2941 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
2942 | * https://tracker.ceph.com/issues/50016 |
||
2943 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
2944 | * https://tracker.ceph.com/issues/48773 |
||
2945 | qa: scrub does not complete |
||
2946 | * https://tracker.ceph.com/issues/50279 |
||
2947 | qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c" |
||
2948 | * https://tracker.ceph.com/issues/50246 |
||
2949 | mds: failure replaying journal (EMetaBlob) |
||
2950 | * https://tracker.ceph.com/issues/48365 |
||
2951 | qa: ffsb build failure on CentOS 8.2 |
||
2952 | * https://tracker.ceph.com/issues/50216 |
||
2953 | qa: "ls: cannot access 'lost+found': No such file or directory" |
||
2954 | * https://tracker.ceph.com/issues/50223 |
||
2955 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2956 | * https://tracker.ceph.com/issues/50280 |
||
2957 | cephadm: RuntimeError: uid/gid not found |
||
2958 | * https://tracker.ceph.com/issues/50281 |
||
2959 | qa: untar_snap_rm timeout |
||
2960 | |||
2961 | 1 | Patrick Donnelly | h3. 2021 Apr 08 |
2962 | |||
2963 | https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/ |
||
2964 | https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix) |
||
2965 | |||
2966 | * https://tracker.ceph.com/issues/50246 |
||
2967 | mds: failure replaying journal (EMetaBlob) |
||
2968 | * https://tracker.ceph.com/issues/50250 |
||
2969 | mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" |
||
2970 | |||
2971 | |||
2972 | h3. 2021 Apr 07 |
||
2973 | |||
2974 | https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/ |
||
2975 | |||
2976 | * https://tracker.ceph.com/issues/50215 |
||
2977 | qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'" |
||
2978 | * https://tracker.ceph.com/issues/49466 |
||
2979 | qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'" |
||
2980 | * https://tracker.ceph.com/issues/50216 |
||
2981 | qa: "ls: cannot access 'lost+found': No such file or directory" |
||
2982 | * https://tracker.ceph.com/issues/48773 |
||
2983 | qa: scrub does not complete |
||
2984 | * https://tracker.ceph.com/issues/49845 |
||
2985 | qa: failed umount in test_volumes |
||
2986 | * https://tracker.ceph.com/issues/50220 |
||
2987 | qa: dbench workload timeout |
||
2988 | * https://tracker.ceph.com/issues/50221 |
||
2989 | qa: snaptest-git-ceph failure in git diff |
||
2990 | * https://tracker.ceph.com/issues/50222 |
||
2991 | osd: 5.2s0 deep-scrub : stat mismatch |
||
2992 | * https://tracker.ceph.com/issues/50223 |
||
2993 | qa: "client.4737 isn't responding to mclientcaps(revoke)" |
||
2994 | * https://tracker.ceph.com/issues/50224 |
||
2995 | qa: test_mirroring_init_failure_with_recovery failure |
||
2996 | |||
2997 | h3. 2021 Apr 01 |
||
2998 | |||
2999 | https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/ |
||
3000 | |||
3001 | * https://tracker.ceph.com/issues/48772 |
||
3002 | qa: pjd: not ok 9, 44, 80 |
||
3003 | * https://tracker.ceph.com/issues/50177 |
||
3004 | osd: "stalled aio... buggy kernel or bad device?" |
||
3005 | * https://tracker.ceph.com/issues/48771 |
||
3006 | qa: iogen: workload fails to cause balancing |
||
3007 | * https://tracker.ceph.com/issues/49845 |
||
3008 | qa: failed umount in test_volumes |
||
3009 | * https://tracker.ceph.com/issues/48773 |
||
3010 | qa: scrub does not complete |
||
3011 | * https://tracker.ceph.com/issues/48805 |
||
3012 | mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" |
||
3013 | * https://tracker.ceph.com/issues/50178 |
||
3014 | qa: "TypeError: run() got an unexpected keyword argument 'shell'" |
||
3015 | * https://tracker.ceph.com/issues/45434 |
||
3016 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
3017 | |||
3018 | h3. 2021 Mar 24 |
||
3019 | |||
3020 | https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/ |
||
3021 | |||
3022 | * https://tracker.ceph.com/issues/49500 |
||
3023 | qa: "Assertion `cb_done' failed." |
||
3024 | * https://tracker.ceph.com/issues/50019 |
||
3025 | qa: mount failure with cephadm "probably no MDS server is up?" |
||
3026 | * https://tracker.ceph.com/issues/50020 |
||
3027 | qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)" |
||
3028 | * https://tracker.ceph.com/issues/48773 |
||
3029 | qa: scrub does not complete |
||
3030 | * https://tracker.ceph.com/issues/45434 |
||
3031 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
3032 | * https://tracker.ceph.com/issues/48805 |
||
3033 | mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" |
||
3034 | * https://tracker.ceph.com/issues/48772 |
||
3035 | qa: pjd: not ok 9, 44, 80 |
||
3036 | * https://tracker.ceph.com/issues/50021 |
||
3037 | qa: snaptest-git-ceph failure during mon thrashing |
||
3038 | * https://tracker.ceph.com/issues/48771 |
||
3039 | qa: iogen: workload fails to cause balancing |
||
3040 | * https://tracker.ceph.com/issues/50016 |
||
3041 | qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes" |
||
3042 | * https://tracker.ceph.com/issues/49466 |
||
3043 | qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'" |
||
3044 | |||
3045 | |||
3046 | h3. 2021 Mar 18 |
||
3047 | |||
3048 | https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/ |
||
3049 | |||
3050 | * https://tracker.ceph.com/issues/49466 |
||
3051 | qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'" |
||
3052 | * https://tracker.ceph.com/issues/48773 |
||
3053 | qa: scrub does not complete |
||
3054 | * https://tracker.ceph.com/issues/48805 |
||
3055 | mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" |
||
3056 | * https://tracker.ceph.com/issues/45434 |
||
3057 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
3058 | * https://tracker.ceph.com/issues/49845 |
||
3059 | qa: failed umount in test_volumes |
||
3060 | * https://tracker.ceph.com/issues/49605 |
||
3061 | mgr: drops command on the floor |
||
3062 | * https://tracker.ceph.com/issues/48203 (stock kernel update required) |
||
3063 | qa: quota failure |
||
3064 | * https://tracker.ceph.com/issues/49928 |
||
3065 | client: items pinned in cache preventing unmount x2 |
||
3066 | |||
3067 | h3. 2021 Mar 15 |
||
3068 | |||
3069 | https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/ |
||
3070 | |||
3071 | * https://tracker.ceph.com/issues/49842 |
||
3072 | qa: stuck pkg install |
||
3073 | * https://tracker.ceph.com/issues/49466 |
||
3074 | qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'" |
||
3075 | * https://tracker.ceph.com/issues/49822 |
||
3076 | test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure |
||
3077 | * https://tracker.ceph.com/issues/49240 |
||
3078 | terminate called after throwing an instance of 'std::bad_alloc' |
||
3079 | * https://tracker.ceph.com/issues/48773 |
||
3080 | qa: scrub does not complete |
||
3081 | * https://tracker.ceph.com/issues/45434 |
||
3082 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
3083 | * https://tracker.ceph.com/issues/49500 |
||
3084 | qa: "Assertion `cb_done' failed." |
||
3085 | * https://tracker.ceph.com/issues/49843 |
||
3086 | qa: fs/snaps/snaptest-upchildrealms.sh failure |
||
3087 | * https://tracker.ceph.com/issues/49845 |
||
3088 | qa: failed umount in test_volumes |
||
3089 | * https://tracker.ceph.com/issues/48805 |
||
3090 | mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" |
||
3091 | * https://tracker.ceph.com/issues/49605 |
||
3092 | mgr: drops command on the floor |
||
3093 | |||
3094 | and failure caused by PR: https://github.com/ceph/ceph/pull/39969 |
||
3095 | |||
3096 | |||
3097 | h3. 2021 Mar 09 |
||
3098 | |||
3099 | https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/ |
||
3100 | |||
3101 | * https://tracker.ceph.com/issues/49500 |
||
3102 | qa: "Assertion `cb_done' failed." |
||
3103 | * https://tracker.ceph.com/issues/48805 |
||
3104 | mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details" |
||
3105 | * https://tracker.ceph.com/issues/48773 |
||
3106 | qa: scrub does not complete |
||
3107 | * https://tracker.ceph.com/issues/45434 |
||
3108 | qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed |
||
3109 | * https://tracker.ceph.com/issues/49240 |
||
3110 | terminate called after throwing an instance of 'std::bad_alloc' |
||
3111 | * https://tracker.ceph.com/issues/49466 |
||
3112 | qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'" |
||
3113 | * https://tracker.ceph.com/issues/49684 |
||
3114 | qa: fs:cephadm mount does not wait for mds to be created |
||
3115 | * https://tracker.ceph.com/issues/48771 |
||
3116 | qa: iogen: workload fails to cause balancing |