
h1. <code>main</code> branch

h3. 2024-04-30

"wip-pdonnell-testing-20240429.210911-debug":https://tracker.ceph.com/issues/65694

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "qa: Health detail: HEALTH_WARN Degraded data redundancy: 40/348 objects degraded (11.494%), 9 pgs degraded in cluster log":https://tracker.ceph.com/issues/65700
* "qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)":https://tracker.ceph.com/issues/65246
* "qa: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)":https://tracker.ceph.com/issues/53859
* "qa/cephfs: test_cephfs_mirror_blocklist raises KeyError: 'rados_inst'":https://tracker.ceph.com/issues/64927
* "qa/suites/fs/upgrade: Command failed ... ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1":https://tracker.ceph.com/issues/65703

h3. 26 APR 2024

* https://pulpito.ceph.com/rishabh-2024-04-24_05:22:11-fs-wip-rishabh-testing-20240416.193735-5-testing-default-smithi/

* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64927
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
* https://tracker.ceph.com/issues/65022
  qa: test_max_items_per_obj open procs not fully cleaned up
* https://tracker.ceph.com/issues/53859
  qa: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/65136
  QA failure: test_fscrypt_dummy_encryption_with_quick_group

* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/65265
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64502
  pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/65020
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/48562
  qa: scrub - object missing on disk; some files may be lost
* https://tracker.ceph.com/issues/55805
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

h3. 2024-04-20

https://tracker.ceph.com/issues/65596

* "qa: logrotate fails when state file is already locked":https://tracker.ceph.com/issues/65612
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "qa/cephfs: test_cephfs_mirror_blocklist raises KeyError: 'rados_inst'":https://tracker.ceph.com/issues/64927
* "qa: health warning no active mgr (MGR_DOWN) occurs before and after test_nfs runs":https://tracker.ceph.com/issues/65265
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed":https://tracker.ceph.com/issues/61243
* "qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)":https://tracker.ceph.com/issues/65246
* "client: resends request to same MDS it just received a forward from if it does not have an open session with the target":https://tracker.ceph.com/issues/65614
* "pybind/mgr/snap_schedule: 1m scheduled snaps not reliably executed":https://tracker.ceph.com/issues/65616
* "qa: fsstress: cannot execute binary file: Exec format error":https://tracker.ceph.com/issues/65618
* "qa: untar_snap_rm failure during mds thrashing":https://tracker.ceph.com/issues/50821
* "[testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)":https://tracker.ceph.com/issues/57656
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067

h3. 2024-04-12

https://tracker.ceph.com/issues/65324

(Many `sudo systemctl stop ceph-ba42f8d0-efae-11ee-b647-cb9ed24678a4@mon.a` and infra-related failures in this run)

* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: scrub - object missing on disk; some files may be lost":https://tracker.ceph.com/issues/48562
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699

h3. 2024-04-04

https://tracker.ceph.com/issues/65300
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240330.172700

(Many `sudo systemctl stop ceph-ba42f8d0-efae-11ee-b647-cb9ed24678a4@mon.a` failures in this run)

* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: scrub - object missing on disk; some files may be lost":https://tracker.ceph.com/issues/48562
* "upgrade stalls after upgrading one ceph-mgr daemon":https://tracker.ceph.com/issues/65263
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
* "qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)":https://tracker.ceph.com/issues/65246
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314

h3. 4 Apr 2024

https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/

* https://tracker.ceph.com/issues/64927
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
* https://tracker.ceph.com/issues/65022
  qa: test_max_items_per_obj open procs not fully cleaned up
* https://tracker.ceph.com/issues/63699
  qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
  qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/65136
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
* https://tracker.ceph.com/issues/65246
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)

* https://tracker.ceph.com/issues/58945
  qa: xfstests-dev's generic test suite has failures with fuse client
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63265
  qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63949
  leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/48562
  qa: scrub - object missing on disk; some files may be lost
* https://tracker.ceph.com/issues/65020
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/64572
  workunits/fsx.sh failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
  client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/54741
  crash: MDSTableClient::got_journaled_ack(unsigned long)

* https://tracker.ceph.com/issues/65265
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
* https://tracker.ceph.com/issues/65308
  qa: fs was offline but also unexpectedly degraded
* https://tracker.ceph.com/issues/65309
  qa: dbench.sh failed with "ERROR: handle 10318 was not found"

* https://tracker.ceph.com/issues/65018
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 2024-04-02

https://tracker.ceph.com/issues/65215

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534

h3. 2024-03-28

https://tracker.ceph.com/issues/65213

* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220

h3. 2024-03-25

https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
* https://tracker.ceph.com/issues/64502
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds

https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/

* https://tracker.ceph.com/issues/62245
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3

h3. 2024-03-20

https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742

https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab

Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.

This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/65018
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/64988
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/64972
    qa: "ceph tell 4.3a deep-scrub" command not found
* https://tracker.ceph.com/issues/54108
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
* https://tracker.ceph.com/issues/65019
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
* https://tracker.ceph.com/issues/65020
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
* https://tracker.ceph.com/issues/65021
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/65022
    qa: test_max_items_per_obj open procs not fully cleaned up

h3. 14th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758

(pjd.sh failures are related to a bug in the testing kernel. See - https://tracker.ceph.com/issues/64679#note-4)

* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 5th March 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/64502
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure
* https://tracker.ceph.com/issues/64707 (new issue)
    suites/fsstress.sh hangs on one client - test times out
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/64711
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/64729 (new issue)
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
* https://tracker.ceph.com/issues/64730
    fs/misc/multiple_rsync.sh workunit times out

h3. 26th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239

(This run is a bit messy due to

  a) OCI runtime issues in the testing kernel with centos9
  b) SELinux denials related failures
  c) Unrelated MON_DOWN warnings)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64572
    workunits/fsx.sh failure

h3. 20th Feb 2024

https://github.com/ceph/ceph/pull/55601
https://github.com/ceph/ceph/pull/55659

https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/

* https://tracker.ceph.com/issues/64502
    client: quincy ceph-fuse fails to unmount after upgrade to main

This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.

h3. 19th Feb 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652

* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/64482
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented

h3. 29 Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/64172
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/64209
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"

h3. 17th Jan 2024

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63949
    leak in mds.c detected by valgrind during CephFS QA run
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63259
    mds: failed to store backtrace and force file system read-only
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'

h3. 16 Jan 2024

https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi

* https://tracker.ceph.com/issues/63764
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63949
  valgrind leak in MDS
* https://tracker.ceph.com/issues/64041
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
* fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
* from the last run, job #7507400 failed due to MGR; FS wasn't degraded, so it's unrelated to CephFS

h3. 06 Dec 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)

* https://tracker.ceph.com/issues/63764
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/63265
    qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
* https://tracker.ceph.com/issues/63806
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)

h3. 30 Nov 2023

https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/

* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure

h3. 29 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705

* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62510 (pending RHEL back port)
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again

h3. 14 Nov 2023
(Milind)

https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63521
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63522
    No module named 'tasks.ceph_fuse'
    No module named 'tasks.kclient'
    No module named 'tasks.cephfs.fuse_mount'
    No module named 'tasks.ceph'
* https://tracker.ceph.com/issues/63523
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh

h3. 14 Nov 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650

(ignore the fs:upgrade test failure - the PR is excluded from merge)

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63519
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

h3. 7 Nov 2023

fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/63473
  fsstress.sh failed with errno 124

h3. 3 Nov 2023

https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/

* https://tracker.ceph.com/issues/63141
  qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"

h3. 24 October 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545

Two failures:

https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/

Probably related to https://github.com/ceph/ceph/pull/53255. Killing the mount as part of the test did not complete. Will research more.

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/63411
    qa: flush journal may cause timeouts of `scrub status`
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/63141
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails

h3. 18 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62580
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

One followup fix:
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
    dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS
* https://tracker.ceph.com/issues/63278
    kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)

h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 2023 Sep 12

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, timesout

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
947
948 172 Rishabh Dave
h3. 6 Sep 2023
949 171 Rishabh Dave
950 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
951 171 Rishabh Dave
952 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
953
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
954 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
955
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
956 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
957 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
958
* https://tracker.ceph.com/issues/59348
959
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
960
* https://tracker.ceph.com/issues/54462
961
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
962
* https://tracker.ceph.com/issues/62556
963
  test_acls: xfstests_dev: python2 is missing
964
* https://tracker.ceph.com/issues/62067
965
  ffsb.sh failure "Resource temporarily unavailable"
966
* https://tracker.ceph.com/issues/57656
967
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
968 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
969
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
970 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
971 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
972
973 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
974
  ior build failure
975
* https://tracker.ceph.com/issues/57676
976
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
977
* https://tracker.ceph.com/issues/55805
978
  error scrub thrashing reached max tries in 900 secs
979 173 Rishabh Dave
980
* https://tracker.ceph.com/issues/62567
981
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
982
* https://tracker.ceph.com/issues/62702
983
  workunit test suites/fsstress.sh on smithi066 with status 124
984 170 Rishabh Dave
985
h3. 5 Sep 2023
986
987
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
988
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
989
  this run has failures but according to Adam King these are not relevant and should be ignored
990
991
* https://tracker.ceph.com/issues/61892
992
  test_snapshot_remove (test_strays.TestStrays) failed
993
* https://tracker.ceph.com/issues/59348
994
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
995
* https://tracker.ceph.com/issues/54462
996
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
997
* https://tracker.ceph.com/issues/62067
998
  ffsb.sh failure "Resource temporarily unavailable"
999
* https://tracker.ceph.com/issues/57656 
1000
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
1001
* https://tracker.ceph.com/issues/59346
1002
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1003
* https://tracker.ceph.com/issues/59344
1004
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1005
* https://tracker.ceph.com/issues/50223
1006
  client.xxxx isn't responding to mclientcaps(revoke)
1007
* https://tracker.ceph.com/issues/57655
1008
  qa: fs:mixed-clients kernel_untar_build failure
1009
* https://tracker.ceph.com/issues/62187
1010
  iozone.sh: line 5: iozone: command not found
1011
 
1012
* https://tracker.ceph.com/issues/61399
1013
  ior build failure
1014
* https://tracker.ceph.com/issues/57676
1015
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1016
* https://tracker.ceph.com/issues/55805
1017
  error scrub thrashing reached max tries in 900 secs
1018 169 Venky Shankar
1019
1020
h3. 31 Aug 2023
1021
1022
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
1023
1024
* https://tracker.ceph.com/issues/52624
1025
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1026
* https://tracker.ceph.com/issues/62187
1027
    iozone: command not found
1028
* https://tracker.ceph.com/issues/61399
1029
    ior build failure
1030
* https://tracker.ceph.com/issues/59531
1031
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
1032
* https://tracker.ceph.com/issues/61399
1033
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1034
* https://tracker.ceph.com/issues/57655
1035
    qa: fs:mixed-clients kernel_untar_build failure
1036
* https://tracker.ceph.com/issues/59344
1037
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1038
* https://tracker.ceph.com/issues/59346
1039
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1040
* https://tracker.ceph.com/issues/59348
1041
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1042
* https://tracker.ceph.com/issues/59413
1043
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
1044
* https://tracker.ceph.com/issues/62653
1045
    qa: unimplemented fcntl command: 1036 with fsstress
1046
* https://tracker.ceph.com/issues/61400
1047
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1048
* https://tracker.ceph.com/issues/62658
1049
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
1050
* https://tracker.ceph.com/issues/62188
1051
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1052 168 Venky Shankar
1053
1054
h3. 25 Aug 2023
1055
1056
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
1057
1058
* https://tracker.ceph.com/issues/59344
1059
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1060
* https://tracker.ceph.com/issues/59346
1061
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1062
* https://tracker.ceph.com/issues/59348
1063
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1064
* https://tracker.ceph.com/issues/57655
1065
    qa: fs:mixed-clients kernel_untar_build failure
1066
* https://tracker.ceph.com/issues/61243
1067
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1068
* https://tracker.ceph.com/issues/61399
1069
    ior build failure
1070
* https://tracker.ceph.com/issues/61399
1071
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1072
* https://tracker.ceph.com/issues/62484
1073
    qa: ffsb.sh test failure
1074
* https://tracker.ceph.com/issues/59531
1075
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
1076
* https://tracker.ceph.com/issues/62510
1077
    snaptest-git-ceph.sh failure with fs/thrash
1078 167 Venky Shankar
1079
1080
h3. 24 Aug 2023
1081
1082
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
1083
1084
* https://tracker.ceph.com/issues/57676
1085
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1086
* https://tracker.ceph.com/issues/51964
1087
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1088
* https://tracker.ceph.com/issues/59344
1089
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1090
* https://tracker.ceph.com/issues/59346
1091
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1092
* https://tracker.ceph.com/issues/59348
1093
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1094
* https://tracker.ceph.com/issues/61399
1095
    ior build failure
1096
* https://tracker.ceph.com/issues/61399
1097
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1098
* https://tracker.ceph.com/issues/62510
1099
    snaptest-git-ceph.sh failure with fs/thrash
1100
* https://tracker.ceph.com/issues/62484
1101
    qa: ffsb.sh test failure
1102
* https://tracker.ceph.com/issues/57087
1103
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1104
* https://tracker.ceph.com/issues/57656
1105
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1106
* https://tracker.ceph.com/issues/62187
1107
    iozone: command not found
1108
* https://tracker.ceph.com/issues/62188
1109
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1110
* https://tracker.ceph.com/issues/62567
1111
    postgres workunit times out - MDS_SLOW_REQUEST in logs
1112 166 Venky Shankar
1113
1114
h3. 22 Aug 2023
1115
1116
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
1117
1118
* https://tracker.ceph.com/issues/57676
1119
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1120
* https://tracker.ceph.com/issues/51964
1121
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1122
* https://tracker.ceph.com/issues/59344
1123
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1124
* https://tracker.ceph.com/issues/59346
1125
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1126
* https://tracker.ceph.com/issues/59348
1127
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1128
* https://tracker.ceph.com/issues/61399
1129
    ior build failure
1130
* https://tracker.ceph.com/issues/61399
1131
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1132
* https://tracker.ceph.com/issues/57655
1133
    qa: fs:mixed-clients kernel_untar_build failure
1134
* https://tracker.ceph.com/issues/61243
1135
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1136
* https://tracker.ceph.com/issues/62188
1137
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1138
* https://tracker.ceph.com/issues/62510
1139
    snaptest-git-ceph.sh failure with fs/thrash
1140
* https://tracker.ceph.com/issues/62511
1141
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
1142 165 Venky Shankar
1143
1144
h3. 14 Aug 2023
1145
1146
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1147
1148
* https://tracker.ceph.com/issues/51964
1149
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1150
* https://tracker.ceph.com/issues/61400
1151
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1152
* https://tracker.ceph.com/issues/61399
1153
    ior build failure
1154
* https://tracker.ceph.com/issues/59348
1155
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1156
* https://tracker.ceph.com/issues/59531
1157
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1158
* https://tracker.ceph.com/issues/59344
1159
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1160
* https://tracker.ceph.com/issues/59346
1161
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1162
* https://tracker.ceph.com/issues/61399
1163
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1164
* https://tracker.ceph.com/issues/59684 [kclient bug]
1165
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1166
* https://tracker.ceph.com/issues/61243 (NEW)
1167
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1168
* https://tracker.ceph.com/issues/57655
1169
    qa: fs:mixed-clients kernel_untar_build failure
1170
* https://tracker.ceph.com/issues/57656
1171
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1172 163 Venky Shankar
1173
1174
h3. 28 JULY 2023
1175
1176
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1177
1178
* https://tracker.ceph.com/issues/51964
1179
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1180
* https://tracker.ceph.com/issues/61400
1181
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1182
* https://tracker.ceph.com/issues/61399
1183
    ior build failure
1184
* https://tracker.ceph.com/issues/57676
1185
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1186
* https://tracker.ceph.com/issues/59348
1187
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1188
* https://tracker.ceph.com/issues/59531
1189
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1190
* https://tracker.ceph.com/issues/59344
1191
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1192
* https://tracker.ceph.com/issues/59346
1193
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1194
* https://github.com/ceph/ceph/pull/52556
1195
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1196
* https://tracker.ceph.com/issues/62187
1197
    iozone: command not found
1198
* https://tracker.ceph.com/issues/61399
1199
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1200
* https://tracker.ceph.com/issues/62188
1201 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1202 158 Rishabh Dave
1203
h3. 24 Jul 2023
1204
1205
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1206
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1207
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1208
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1209
One more run to check if blogbench.sh fails every time:
1210
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1211
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1212 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1213
1214
* https://tracker.ceph.com/issues/61892
1215
  test_snapshot_remove (test_strays.TestStrays) failed
1216
* https://tracker.ceph.com/issues/53859
1217
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1218
* https://tracker.ceph.com/issues/61982
1219
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1220
* https://tracker.ceph.com/issues/52438
1221
  qa: ffsb timeout
1222
* https://tracker.ceph.com/issues/54460
1223
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1224
* https://tracker.ceph.com/issues/57655
1225
  qa: fs:mixed-clients kernel_untar_build failure
1226
* https://tracker.ceph.com/issues/48773
1227
  reached max tries: scrub does not complete
1228
* https://tracker.ceph.com/issues/58340
1229
  mds: fsstress.sh hangs with multimds
1230
* https://tracker.ceph.com/issues/61400
1231
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1232
* https://tracker.ceph.com/issues/57206
1233
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1234
  
1235
* https://tracker.ceph.com/issues/57656
1236
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1237
* https://tracker.ceph.com/issues/61399
1238
  ior build failure
1239
* https://tracker.ceph.com/issues/57676
1240
  error during scrub thrashing: backtrace
1241
  
1242
* https://tracker.ceph.com/issues/38452
1243
  'sudo -u postgres -- pgbench -s 500 -i' failed
1244 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1245 157 Venky Shankar
  blogbench.sh failure
1246
1247
h3. 18 July 2023
1248
1249
* https://tracker.ceph.com/issues/52624
1250
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1251
* https://tracker.ceph.com/issues/57676
1252
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1253
* https://tracker.ceph.com/issues/54460
1254
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1255
* https://tracker.ceph.com/issues/57655
1256
    qa: fs:mixed-clients kernel_untar_build failure
1257
* https://tracker.ceph.com/issues/51964
1258
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1259
* https://tracker.ceph.com/issues/59344
1260
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1261
* https://tracker.ceph.com/issues/61182
1262
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1263
* https://tracker.ceph.com/issues/61957
1264
    test_client_limits.TestClientLimits.test_client_release_bug
1265
* https://tracker.ceph.com/issues/59348
1266
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1267
* https://tracker.ceph.com/issues/61892
1268
    test_strays.TestStrays.test_snapshot_remove failed
1269
* https://tracker.ceph.com/issues/59346
1270
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1271
* https://tracker.ceph.com/issues/44565
1272
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1273
* https://tracker.ceph.com/issues/62067
1274
    ffsb.sh failure "Resource temporarily unavailable"
1275 156 Venky Shankar
1276
1277
h3. 17 July 2023
1278
1279
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1280
1281
* https://tracker.ceph.com/issues/61982
1282
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1283
* https://tracker.ceph.com/issues/59344
1284
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1285
* https://tracker.ceph.com/issues/61182
1286
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1287
* https://tracker.ceph.com/issues/61957
1288
    test_client_limits.TestClientLimits.test_client_release_bug
1289
* https://tracker.ceph.com/issues/61400
1290
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1291
* https://tracker.ceph.com/issues/59348
1292
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1293
* https://tracker.ceph.com/issues/61892
1294
    test_strays.TestStrays.test_snapshot_remove failed
1295
* https://tracker.ceph.com/issues/59346
1296
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1297
* https://tracker.ceph.com/issues/62036
1298
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1299
* https://tracker.ceph.com/issues/61737
1300
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1301
* https://tracker.ceph.com/issues/44565
1302
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1303 155 Rishabh Dave
1304 1 Patrick Donnelly
1305 153 Rishabh Dave
h3. 13 July 2023 Run 2
1306 152 Rishabh Dave
1307
1308
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1309
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1310
1311
* https://tracker.ceph.com/issues/61957
1312
  test_client_limits.TestClientLimits.test_client_release_bug
1313
* https://tracker.ceph.com/issues/61982
1314
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1315
* https://tracker.ceph.com/issues/59348
1316
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1317
* https://tracker.ceph.com/issues/59344
1318
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1319
* https://tracker.ceph.com/issues/54460
1320
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1321
* https://tracker.ceph.com/issues/57655
1322
  qa: fs:mixed-clients kernel_untar_build failure
1323
* https://tracker.ceph.com/issues/61400
1324
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1325
* https://tracker.ceph.com/issues/61399
1326
  ior build failure
1327
1328 151 Venky Shankar
h3. 13 July 2023
1329
1330
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1331
1332
* https://tracker.ceph.com/issues/54460
1333
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1334
* https://tracker.ceph.com/issues/61400
1335
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1336
* https://tracker.ceph.com/issues/57655
1337
    qa: fs:mixed-clients kernel_untar_build failure
1338
* https://tracker.ceph.com/issues/61945
1339
    LibCephFS.DelegTimeout failure
1340
* https://tracker.ceph.com/issues/52624
1341
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1342
* https://tracker.ceph.com/issues/57676
1343
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1344
* https://tracker.ceph.com/issues/59348
1345
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1346
* https://tracker.ceph.com/issues/59344
1347
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1348
* https://tracker.ceph.com/issues/51964
1349
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1350
* https://tracker.ceph.com/issues/59346
1351
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1352
* https://tracker.ceph.com/issues/61982
1353
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1354 150 Rishabh Dave
1355
1356
h3. 13 Jul 2023
1357
1358
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1359
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1360
1361
* https://tracker.ceph.com/issues/61957
1362
  test_client_limits.TestClientLimits.test_client_release_bug
1363
* https://tracker.ceph.com/issues/59348
1364
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1365
* https://tracker.ceph.com/issues/59346
1366
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1367
* https://tracker.ceph.com/issues/48773
1368
  scrub does not complete: reached max tries
1369
* https://tracker.ceph.com/issues/59344
1370
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1371
* https://tracker.ceph.com/issues/52438
1372
  qa: ffsb timeout
1373
* https://tracker.ceph.com/issues/57656
1374
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1375
* https://tracker.ceph.com/issues/58742
1376
  xfstests-dev: kcephfs: generic
1377
* https://tracker.ceph.com/issues/61399
1378 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1379 149 Rishabh Dave
1380 148 Rishabh Dave
h3. 12 July 2023
1381
1382
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1383
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1384
1385
* https://tracker.ceph.com/issues/61892
1386
  test_strays.TestStrays.test_snapshot_remove failed
1387
* https://tracker.ceph.com/issues/59348
1388
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1389
* https://tracker.ceph.com/issues/53859
1390
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1391
* https://tracker.ceph.com/issues/59346
1392
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1393
* https://tracker.ceph.com/issues/58742
1394
  xfstests-dev: kcephfs: generic
1395
* https://tracker.ceph.com/issues/59344
1396
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1397
* https://tracker.ceph.com/issues/52438
1398
  qa: ffsb timeout
1399
* https://tracker.ceph.com/issues/57656
1400
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1401
* https://tracker.ceph.com/issues/54460
1402
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1403
* https://tracker.ceph.com/issues/57655
1404
  qa: fs:mixed-clients kernel_untar_build failure
1405
* https://tracker.ceph.com/issues/61182
1406
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1407
* https://tracker.ceph.com/issues/61400
1408
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1409 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1410 146 Patrick Donnelly
  reached max tries: scrub does not complete
1411
1412
h3. 05 July 2023
1413
1414
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1415
1416 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1417 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1418
1419
h3. 27 Jun 2023
1420
1421
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1422 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1423
1424
* https://tracker.ceph.com/issues/59348
1425
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1426
* https://tracker.ceph.com/issues/54460
1427
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1428
* https://tracker.ceph.com/issues/59346
1429
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1430
* https://tracker.ceph.com/issues/59344
1431
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1432
* https://tracker.ceph.com/issues/61399
1433
  libmpich: undefined references to fi_strerror
1434
* https://tracker.ceph.com/issues/50223
1435
  client.xxxx isn't responding to mclientcaps(revoke)
1436 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1437
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1438 142 Venky Shankar
1439
1440
h3. 22 June 2023
1441
1442
* https://tracker.ceph.com/issues/57676
1443
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1444
* https://tracker.ceph.com/issues/54460
1445
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1446
* https://tracker.ceph.com/issues/59344
1447
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1448
* https://tracker.ceph.com/issues/59348
1449
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1450
* https://tracker.ceph.com/issues/61400
1451
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1452
* https://tracker.ceph.com/issues/57655
1453
    qa: fs:mixed-clients kernel_untar_build failure
1454
* https://tracker.ceph.com/issues/61394
1455
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1456
* https://tracker.ceph.com/issues/61762
1457
    qa: wait_for_clean: failed before timeout expired
1458
* https://tracker.ceph.com/issues/61775
1459
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1460
* https://tracker.ceph.com/issues/44565
1461
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1462
* https://tracker.ceph.com/issues/61790
1463
    cephfs client to mds comms remain silent after reconnect
1464
* https://tracker.ceph.com/issues/61791
1465
    snaptest-git-ceph.sh test timed out (job dead)
1466 139 Venky Shankar
1467
1468
h3. 20 June 2023
1469
1470
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1471
1472
* https://tracker.ceph.com/issues/57676
1473
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1474
* https://tracker.ceph.com/issues/54460
1475
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1476 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1477 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1478 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1479 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1480
* https://tracker.ceph.com/issues/59344
1481
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1482
* https://tracker.ceph.com/issues/59348
1483
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1484
* https://tracker.ceph.com/issues/57656
1485
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1486
* https://tracker.ceph.com/issues/61400
1487
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1488
* https://tracker.ceph.com/issues/57655
1489
    qa: fs:mixed-clients kernel_untar_build failure
1490
* https://tracker.ceph.com/issues/44565
1491
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1492
* https://tracker.ceph.com/issues/61737
1493 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1494
1495
h3. 16 June 2023
1496
1497 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1498 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1499 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1500 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1501
1502
1503
* https://tracker.ceph.com/issues/59344
1504
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1505 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1506
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1507 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1508
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1509
* https://tracker.ceph.com/issues/57656
1510
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1511
* https://tracker.ceph.com/issues/54460
1512
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1513 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1514
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1515 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1516
  libmpich: undefined references to fi_strerror
1517
* https://tracker.ceph.com/issues/58945
1518
  xfstests-dev: ceph-fuse: generic 
1519 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1520 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1521
1522
h3. 24 May 2023
1523
1524
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1525
1526
* https://tracker.ceph.com/issues/57676
1527
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1528
* https://tracker.ceph.com/issues/59683
1529
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1530
* https://tracker.ceph.com/issues/61399
1531
    qa: "[Makefile:299: ior] Error 1"
1532
* https://tracker.ceph.com/issues/61265
1533
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1534
* https://tracker.ceph.com/issues/59348
1535
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1536
* https://tracker.ceph.com/issues/59346
1537
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1538
* https://tracker.ceph.com/issues/61400
1539
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1540
* https://tracker.ceph.com/issues/54460
1541
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1542
* https://tracker.ceph.com/issues/51964
1543
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1544
* https://tracker.ceph.com/issues/59344
1545
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1546
* https://tracker.ceph.com/issues/61407
1547
    mds: abort on CInode::verify_dirfrags
1548
* https://tracker.ceph.com/issues/48773
1549
    qa: scrub does not complete
1550
* https://tracker.ceph.com/issues/57655
1551
    qa: fs:mixed-clients kernel_untar_build failure
1552
* https://tracker.ceph.com/issues/61409
1553 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1554
1555
h3. 15 May 2023
1556 130 Venky Shankar
1557 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1558
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1559
1560
* https://tracker.ceph.com/issues/52624
1561
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1562
* https://tracker.ceph.com/issues/54460
1563
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1564
* https://tracker.ceph.com/issues/57676
1565
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1566
* https://tracker.ceph.com/issues/59684 [kclient bug]
1567
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1568
* https://tracker.ceph.com/issues/59348
1569
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1570 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1571
    dbench test results in call trace in dmesg [kclient bug]
1572 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1573 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1574 125 Venky Shankar
1575
 
1576 129 Rishabh Dave
h3. 11 May 2023
1577
1578
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1579
1580
* https://tracker.ceph.com/issues/59684 [kclient bug]
1581
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1582
* https://tracker.ceph.com/issues/59348
1583
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1584
* https://tracker.ceph.com/issues/57655
1585
  qa: fs:mixed-clients kernel_untar_build failure
1586
* https://tracker.ceph.com/issues/57676
1587
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1588
* https://tracker.ceph.com/issues/55805
1589
  error during scrub thrashing reached max tries in 900 secs
1590
* https://tracker.ceph.com/issues/54460
1591
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1592
* https://tracker.ceph.com/issues/57656
1593
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1594
* https://tracker.ceph.com/issues/58220
1595
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1596 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1597
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1598 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1599
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1600 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1601
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1602 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1603
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1604
1605 125 Venky Shankar
h3. 11 May 2023
1606 127 Venky Shankar
1607
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1608 126 Venky Shankar
1609 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1610
 was included in the branch; however, the PR got updated and needs a retest).
1611
1612
* https://tracker.ceph.com/issues/52624
1613
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1614
* https://tracker.ceph.com/issues/54460
1615
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1616
* https://tracker.ceph.com/issues/57676
1617
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1618
* https://tracker.ceph.com/issues/59683
1619
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1620
* https://tracker.ceph.com/issues/59684 [kclient bug]
1621
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1622
* https://tracker.ceph.com/issues/59348
1623 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1624
1625
h3. 09 May 2023
1626
1627
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1628
1629
* https://tracker.ceph.com/issues/52624
1630
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1631
* https://tracker.ceph.com/issues/58340
1632
    mds: fsstress.sh hangs with multimds
1633
* https://tracker.ceph.com/issues/54460
1634
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1635
* https://tracker.ceph.com/issues/57676
1636
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1637
* https://tracker.ceph.com/issues/51964
1638
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1639
* https://tracker.ceph.com/issues/59350
1640
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1641
* https://tracker.ceph.com/issues/59683
1642
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1643
* https://tracker.ceph.com/issues/59684 [kclient bug]
1644
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1645
* https://tracker.ceph.com/issues/59348
1646 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1647
1648
h3. 10 Apr 2023
1649
1650
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1651
1652
* https://tracker.ceph.com/issues/52624
1653
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1654
* https://tracker.ceph.com/issues/58340
1655
    mds: fsstress.sh hangs with multimds
1656
* https://tracker.ceph.com/issues/54460
1657
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1658
* https://tracker.ceph.com/issues/57676
1659
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1660 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1661 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1662 121 Rishabh Dave
1663 120 Rishabh Dave
h3. 31 Mar 2023
1664 122 Rishabh Dave
1665
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1666 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1667
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1668
1669
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1670
1671
* https://tracker.ceph.com/issues/57676
1672
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1673
* https://tracker.ceph.com/issues/54460
1674
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1675
* https://tracker.ceph.com/issues/58220
1676
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1677
* https://tracker.ceph.com/issues/58220#note-9
1678
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1679
* https://tracker.ceph.com/issues/56695
1680
  Command failed (workunit test suites/pjd.sh)
1681
* https://tracker.ceph.com/issues/58564 
1682
  workunit dbench failed with error code 1
1683
* https://tracker.ceph.com/issues/57206
1684
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1685
* https://tracker.ceph.com/issues/57580
1686
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1687
* https://tracker.ceph.com/issues/58940
1688
  ceph osd hit ceph_abort
1689
* https://tracker.ceph.com/issues/55805
1690 118 Venky Shankar
  error scrub thrashing reached max tries in 900 secs
1691
1692
h3. 30 March 2023
1693
1694
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1695
1696
* https://tracker.ceph.com/issues/58938
1697
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1698
* https://tracker.ceph.com/issues/51964
1699
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1700
* https://tracker.ceph.com/issues/58340
1701 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1702
1703 115 Venky Shankar
h3. 29 March 2023
1704 114 Venky Shankar
1705
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1706
1707
* https://tracker.ceph.com/issues/56695
1708
    [RHEL stock] pjd test failures
1709
* https://tracker.ceph.com/issues/57676
1710
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1711
* https://tracker.ceph.com/issues/57087
1712
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1713 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1714
    mds: fsstress.sh hangs with multimds
1715 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1716
    qa: fs:mixed-clients kernel_untar_build failure
1717 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1718
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1719 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1720 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1721
1722
h3. 13 Mar 2023
1723
1724
* https://tracker.ceph.com/issues/56695
1725
    [RHEL stock] pjd test failures
1726
* https://tracker.ceph.com/issues/57676
1727
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1728
* https://tracker.ceph.com/issues/51964
1729
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1730
* https://tracker.ceph.com/issues/54460
1731
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1732
* https://tracker.ceph.com/issues/57656
1733 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1734
1735
h3. 09 Mar 2023
1736
1737
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1738
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1739
1740
* https://tracker.ceph.com/issues/56695
1741
    [RHEL stock] pjd test failures
1742
* https://tracker.ceph.com/issues/57676
1743
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1744
* https://tracker.ceph.com/issues/51964
1745
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1746
* https://tracker.ceph.com/issues/54460
1747
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1748
* https://tracker.ceph.com/issues/58340
1749
    mds: fsstress.sh hangs with multimds
1750
* https://tracker.ceph.com/issues/57087
1751 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1752
1753
h3. 07 Mar 2023
1754
1755
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1756
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1757
1758
* https://tracker.ceph.com/issues/56695
1759
    [RHEL stock] pjd test failures
1760
* https://tracker.ceph.com/issues/57676
1761
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1762
* https://tracker.ceph.com/issues/51964
1763
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1764
* https://tracker.ceph.com/issues/57656
1765
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1766
* https://tracker.ceph.com/issues/57655
1767
    qa: fs:mixed-clients kernel_untar_build failure
1768
* https://tracker.ceph.com/issues/58220
1769
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1770
* https://tracker.ceph.com/issues/54460
1771
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1772
* https://tracker.ceph.com/issues/58934
1773 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1774
1775
h3. 28 Feb 2023
1776
1777
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1778
1779
* https://tracker.ceph.com/issues/56695
1780
    [RHEL stock] pjd test failures
1781
* https://tracker.ceph.com/issues/57676
1782
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1783 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1784 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1785
1786 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1787
1788
h3. 25 Jan 2023
1789
1790
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1791
1792
* https://tracker.ceph.com/issues/52624
1793
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1794
* https://tracker.ceph.com/issues/56695
1795
    [RHEL stock] pjd test failures
1796
* https://tracker.ceph.com/issues/57676
1797
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1798
* https://tracker.ceph.com/issues/56446
1799
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1800
* https://tracker.ceph.com/issues/57206
1801
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1802
* https://tracker.ceph.com/issues/58220
1803
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1804
* https://tracker.ceph.com/issues/58340
1805
  mds: fsstress.sh hangs with multimds
1806
* https://tracker.ceph.com/issues/56011
1807
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1808
* https://tracker.ceph.com/issues/54460
1809 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1810
1811
h3. 30 JAN 2023
1812
1813
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1814
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1815 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1816
1817 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1818
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1819
* https://tracker.ceph.com/issues/56695
1820
  [RHEL stock] pjd test failures
1821
* https://tracker.ceph.com/issues/57676
1822
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1823
* https://tracker.ceph.com/issues/55332
1824
  Failure in snaptest-git-ceph.sh
1825
* https://tracker.ceph.com/issues/51964
1826
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1827
* https://tracker.ceph.com/issues/56446
1828
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1829
* https://tracker.ceph.com/issues/57655 
1830
  qa: fs:mixed-clients kernel_untar_build failure
1831
* https://tracker.ceph.com/issues/54460
1832
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1833 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1834
  mds: fsstress.sh hangs with multimds
1835 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1836 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1837
1838
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1839 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1840
  According to Venky this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1841 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1842 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1843
1844
h3. 15 Dec 2022
1845
1846
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1847
1848
* https://tracker.ceph.com/issues/52624
1849
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1850
* https://tracker.ceph.com/issues/56695
1851
    [RHEL stock] pjd test failures
1852
* https://tracker.ceph.com/issues/58219
1853
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1854
* https://tracker.ceph.com/issues/57655
1855
    qa: fs:mixed-clients kernel_untar_build failure
1856
* https://tracker.ceph.com/issues/57676
1857
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1858
* https://tracker.ceph.com/issues/58340
1859 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1860
1861
h3. 08 Dec 2022
1862 99 Venky Shankar
1863 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1864
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1865
1866
(lots of transient git.ceph.com failures)
1867
1868
* https://tracker.ceph.com/issues/52624
1869
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1870
* https://tracker.ceph.com/issues/56695
1871
    [RHEL stock] pjd test failures
1872
* https://tracker.ceph.com/issues/57655
1873
    qa: fs:mixed-clients kernel_untar_build failure
1874
* https://tracker.ceph.com/issues/58219
1875
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1876
* https://tracker.ceph.com/issues/58220
1877
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1878 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1879
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1880 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1881
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1882
* https://tracker.ceph.com/issues/54460
1883
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1884 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1885 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1886
1887
h3. 14 Oct 2022
1888
1889
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1890
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1891
1892
* https://tracker.ceph.com/issues/52624
1893
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1894
* https://tracker.ceph.com/issues/55804
1895
    Command failed (workunit test suites/pjd.sh)
1896
* https://tracker.ceph.com/issues/51964
1897
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1898
* https://tracker.ceph.com/issues/57682
1899
    client: ERROR: test_reconnect_after_blocklisted
1900 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1901 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1902
1903
h3. 10 Oct 2022
1904 92 Rishabh Dave
1905 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1906
1907
reruns
1908
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1909 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1910 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1911 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1912 91 Rishabh Dave
1913
known bugs
1914
* https://tracker.ceph.com/issues/52624
1915
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1916
* https://tracker.ceph.com/issues/50223
1917
  client.xxxx isn't responding to mclientcaps(revoke
1918
* https://tracker.ceph.com/issues/57299
1919
  qa: test_dump_loads fails with JSONDecodeError
1920
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1921
  qa: fs:mixed-clients kernel_untar_build failure
1922
* https://tracker.ceph.com/issues/57206
1923 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1924
1925
h3. 2022 Sep 29
1926
1927
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1928
1929
* https://tracker.ceph.com/issues/55804
1930
  Command failed (workunit test suites/pjd.sh)
1931
* https://tracker.ceph.com/issues/36593
1932
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1933
* https://tracker.ceph.com/issues/52624
1934
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1935
* https://tracker.ceph.com/issues/51964
1936
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1937
* https://tracker.ceph.com/issues/56632
1938
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1939
* https://tracker.ceph.com/issues/50821
1940 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1941
1942
h3. 2022 Sep 26
1943
1944
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1945
1946
* https://tracker.ceph.com/issues/55804
1947
    qa failure: pjd link tests failed
1948
* https://tracker.ceph.com/issues/57676
1949
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1950
* https://tracker.ceph.com/issues/52624
1951
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1952
* https://tracker.ceph.com/issues/57580
1953
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1954
* https://tracker.ceph.com/issues/48773
1955
    qa: scrub does not complete
1956
* https://tracker.ceph.com/issues/57299
1957
    qa: test_dump_loads fails with JSONDecodeError
1958
* https://tracker.ceph.com/issues/57280
1959
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1960
* https://tracker.ceph.com/issues/57205
1961
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1962
* https://tracker.ceph.com/issues/57656
1963
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1964
* https://tracker.ceph.com/issues/57677
1965
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1966
* https://tracker.ceph.com/issues/57206
1967
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1968
* https://tracker.ceph.com/issues/57446
1969
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1970 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1971
    qa: fs:mixed-clients kernel_untar_build failure
1972 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1973
    client: ERROR: test_reconnect_after_blocklisted
1974 87 Patrick Donnelly
1975
1976
h3. 2022 Sep 22
1977
1978
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1979
1980
* https://tracker.ceph.com/issues/57299
1981
    qa: test_dump_loads fails with JSONDecodeError
1982
* https://tracker.ceph.com/issues/57205
1983
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1984
* https://tracker.ceph.com/issues/52624
1985
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1986
* https://tracker.ceph.com/issues/57580
1987
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1988
* https://tracker.ceph.com/issues/57280
1989
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1990
* https://tracker.ceph.com/issues/48773
1991
    qa: scrub does not complete
1992
* https://tracker.ceph.com/issues/56446
1993
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1994
* https://tracker.ceph.com/issues/57206
1995
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1996
* https://tracker.ceph.com/issues/51267
1997
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1998
1999
NEW:
2000
2001
* https://tracker.ceph.com/issues/57656
2002
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
2003
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
2004
    qa: fs:mixed-clients kernel_untar_build failure
2005
* https://tracker.ceph.com/issues/57657
2006
    mds: scrub locates mismatch between child accounted_rstats and self rstats
2007
2008
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
2009 80 Venky Shankar
2010 79 Venky Shankar
2011
h3. 2022 Sep 16
2012
2013
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
2014
2015
* https://tracker.ceph.com/issues/57446
2016
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
2017
* https://tracker.ceph.com/issues/57299
2018
    qa: test_dump_loads fails with JSONDecodeError
2019
* https://tracker.ceph.com/issues/50223
2020
    client.xxxx isn't responding to mclientcaps(revoke)
2021
* https://tracker.ceph.com/issues/52624
2022
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2023
* https://tracker.ceph.com/issues/57205
2024
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
2025
* https://tracker.ceph.com/issues/57280
2026
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
2027
* https://tracker.ceph.com/issues/51282
2028
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2029
* https://tracker.ceph.com/issues/48203
2030
  https://tracker.ceph.com/issues/36593
2031
    qa: quota failure
2032
    qa: quota failure caused by clients stepping on each other
2033
* https://tracker.ceph.com/issues/57580
2034 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
2035
2036 76 Rishabh Dave
2037
h3. 2022 Aug 26
2038
2039
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
2040
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
2041
2042
* https://tracker.ceph.com/issues/57206
2043
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
2044
* https://tracker.ceph.com/issues/56632
2045
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2046
* https://tracker.ceph.com/issues/56446
2047
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2048
* https://tracker.ceph.com/issues/51964
2049
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2050
* https://tracker.ceph.com/issues/53859
2051
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2052
2053
* https://tracker.ceph.com/issues/54460
2054
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2055
* https://tracker.ceph.com/issues/54462
2056
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2057
2059
* https://tracker.ceph.com/issues/36593
2060
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2061
2062
* https://tracker.ceph.com/issues/52624
2063
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2064
* https://tracker.ceph.com/issues/55804
2065
  Command failed (workunit test suites/pjd.sh)
2066
* https://tracker.ceph.com/issues/50223
2067
  client.xxxx isn't responding to mclientcaps(revoke)
2068 75 Venky Shankar
2069
2070
h3. 2022 Aug 22
2071
2072
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
2073
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
2074
2075
* https://tracker.ceph.com/issues/52624
2076
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2077
* https://tracker.ceph.com/issues/56446
2078
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2079
* https://tracker.ceph.com/issues/55804
2080
    Command failed (workunit test suites/pjd.sh)
2081
* https://tracker.ceph.com/issues/51278
2082
    mds: "FAILED ceph_assert(!segments.empty())"
2083
* https://tracker.ceph.com/issues/54460
2084
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2085
* https://tracker.ceph.com/issues/57205
2086
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
2087
* https://tracker.ceph.com/issues/57206
2088
    ceph_test_libcephfs_reclaim crashes during test
2089
* https://tracker.ceph.com/issues/53859
2090
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2091
* https://tracker.ceph.com/issues/50223
2092 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
2093
2094
h3. 2022 Aug 12
2095
2096
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
2097
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
2098
2099
* https://tracker.ceph.com/issues/52624
2100
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2101
* https://tracker.ceph.com/issues/56446
2102
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2103
* https://tracker.ceph.com/issues/51964
2104
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2105
* https://tracker.ceph.com/issues/55804
2106
    Command failed (workunit test suites/pjd.sh)
2107
* https://tracker.ceph.com/issues/50223
2108
    client.xxxx isn't responding to mclientcaps(revoke)
2109
* https://tracker.ceph.com/issues/50821
2110 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
2111 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2112 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2113
2114
h3. 2022 Aug 04
2115
2116
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2117
2118 69 Rishabh Dave
Unrelated teuthology failure on RHEL
2119 68 Rishabh Dave
2120
h3. 2022 Jul 25
2121
2122
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2123
2124 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2125
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2126 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2127
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2128
2129
* https://tracker.ceph.com/issues/55804
2130
  Command failed (workunit test suites/pjd.sh)
2131
* https://tracker.ceph.com/issues/50223
2132
  client.xxxx isn't responding to mclientcaps(revoke)
2133
2134
* https://tracker.ceph.com/issues/54460
2135
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2136 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2137 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2138 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2139 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2140
2141
h3. 2022 July 22
2142
2143
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2144
2145
MDS_HEALTH_DUMMY error in log fixed by follow-up commit.
2146
transient selinux ping failure
2147
2148
* https://tracker.ceph.com/issues/56694
2149
    qa: avoid blocking forever on hung umount
2150
* https://tracker.ceph.com/issues/56695
2151
    [RHEL stock] pjd test failures
2152
* https://tracker.ceph.com/issues/56696
2153
    admin keyring disappears during qa run
2154
* https://tracker.ceph.com/issues/56697
2155
    qa: fs/snaps fails for fuse
2156
* https://tracker.ceph.com/issues/50222
2157
    osd: 5.2s0 deep-scrub : stat mismatch
2158
* https://tracker.ceph.com/issues/56698
2159
    client: FAILED ceph_assert(_size == 0)
2160
* https://tracker.ceph.com/issues/50223
2161
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2162 66 Rishabh Dave
2163 65 Rishabh Dave
2164
h3. 2022 Jul 15
2165
2166
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2167
2168
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2169
2170
* https://tracker.ceph.com/issues/53859
2171
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2172
* https://tracker.ceph.com/issues/55804
2173
  Command failed (workunit test suites/pjd.sh)
2174
* https://tracker.ceph.com/issues/50223
2175
  client.xxxx isn't responding to mclientcaps(revoke)
2176
* https://tracker.ceph.com/issues/50222
2177
  osd: deep-scrub : stat mismatch
2178
2179
* https://tracker.ceph.com/issues/56632
2180
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2181
* https://tracker.ceph.com/issues/56634
2182
  workunit test fs/snaps/snaptest-intodir.sh
2183
* https://tracker.ceph.com/issues/56644
2184
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2185
2186 61 Rishabh Dave
2187
2188
h3. 2022 July 05
2189 62 Rishabh Dave
2190 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2191
2192
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2193
2194
On 2nd re-run only a few jobs failed -
2195 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2196
2197
2198
* https://tracker.ceph.com/issues/56446
2199
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2200
* https://tracker.ceph.com/issues/55804
2201
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2202
2203
* https://tracker.ceph.com/issues/56445
2204 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2205
* https://tracker.ceph.com/issues/51267
2206
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2207 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2208
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2209 61 Rishabh Dave
2210 58 Venky Shankar
2211
2212
h3. 2022 July 04
2213
2214
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2215
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
2216
2217
* https://tracker.ceph.com/issues/56445
2218 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2219
* https://tracker.ceph.com/issues/56446
2220
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2221
* https://tracker.ceph.com/issues/51964
2222 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2223 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2224 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2225
2226
h3. 2022 June 20
2227
2228
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2229
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2230
2231
* https://tracker.ceph.com/issues/52624
2232
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2233
* https://tracker.ceph.com/issues/55804
2234
    qa failure: pjd link tests failed
2235
* https://tracker.ceph.com/issues/54108
2236
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2237
* https://tracker.ceph.com/issues/55332
2238 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2239
2240
h3. 2022 June 13
2241
2242
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2243
2244
* https://tracker.ceph.com/issues/56024
2245
    cephadm: removes ceph.conf during qa run causing command failure
2246
* https://tracker.ceph.com/issues/48773
2247
    qa: scrub does not complete
2248
* https://tracker.ceph.com/issues/56012
2249
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2250 55 Venky Shankar
2251 54 Venky Shankar
2252
h3. 2022 Jun 13
2253
2254
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2255
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2256
2257
* https://tracker.ceph.com/issues/52624
2258
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2259
* https://tracker.ceph.com/issues/51964
2260
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2261
* https://tracker.ceph.com/issues/53859
2262
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2263
* https://tracker.ceph.com/issues/55804
2264
    qa failure: pjd link tests failed
2265
* https://tracker.ceph.com/issues/56003
2266
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2267
* https://tracker.ceph.com/issues/56011
2268
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2269
* https://tracker.ceph.com/issues/56012
2270 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2271
2272
h3. 2022 Jun 07
2273
2274
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2275
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2276
2277
* https://tracker.ceph.com/issues/52624
2278
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2279
* https://tracker.ceph.com/issues/50223
2280
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2281
* https://tracker.ceph.com/issues/50224
2282 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2283
2284
h3. 2022 May 12
2285 52 Venky Shankar
2286 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2287
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
2288
2289
* https://tracker.ceph.com/issues/52624
2290
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2291
* https://tracker.ceph.com/issues/50223
2292
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2293
* https://tracker.ceph.com/issues/55332
2294
    Failure in snaptest-git-ceph.sh
2295
* https://tracker.ceph.com/issues/53859
2296 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2297 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2298
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2299 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2300 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (crops up again, though very infrequently)
2301
2302 50 Venky Shankar
h3. 2022 May 04
2303
2304
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2305 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2306
2307
* https://tracker.ceph.com/issues/52624
2308
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2309
* https://tracker.ceph.com/issues/50223
2310
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2311
* https://tracker.ceph.com/issues/55332
2312
    Failure in snaptest-git-ceph.sh
2313
* https://tracker.ceph.com/issues/53859
2314
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2315
* https://tracker.ceph.com/issues/55516
2316
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2317
* https://tracker.ceph.com/issues/55537
2318
    mds: crash during fs:upgrade test
2319
* https://tracker.ceph.com/issues/55538
2320 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2321
2322
h3. 2022 Apr 25
2323
2324
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2325
2326
* https://tracker.ceph.com/issues/52624
2327
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2328
* https://tracker.ceph.com/issues/50223
2329
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2330
* https://tracker.ceph.com/issues/55258
2331
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2332
* https://tracker.ceph.com/issues/55377
2333 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2334
2335
h3. 2022 Apr 14
2336
2337
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2338
2339
* https://tracker.ceph.com/issues/52624
2340
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2341
* https://tracker.ceph.com/issues/50223
2342
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2343
* https://tracker.ceph.com/issues/52438
2344
    qa: ffsb timeout
2345
* https://tracker.ceph.com/issues/55170
2346
    mds: crash during rejoin (CDir::fetch_keys)
2347
* https://tracker.ceph.com/issues/55331
2348
    pjd failure
2349
* https://tracker.ceph.com/issues/48773
2350
    qa: scrub does not complete
2351
* https://tracker.ceph.com/issues/55332
2352
    Failure in snaptest-git-ceph.sh
2353
* https://tracker.ceph.com/issues/55258
2354 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2355
2356 46 Venky Shankar
h3. 2022 Apr 11
2357 45 Venky Shankar
2358
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2359
2360
* https://tracker.ceph.com/issues/48773
2361
    qa: scrub does not complete
2362
* https://tracker.ceph.com/issues/52624
2363
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2364
* https://tracker.ceph.com/issues/52438
2365
    qa: ffsb timeout
2366
* https://tracker.ceph.com/issues/48680
2367
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2368
* https://tracker.ceph.com/issues/55236
2369
    qa: fs/snaps tests fails with "hit max job timeout"
2370
* https://tracker.ceph.com/issues/54108
2371
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2372
* https://tracker.ceph.com/issues/54971
2373
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2374
* https://tracker.ceph.com/issues/50223
2375
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2376
* https://tracker.ceph.com/issues/55258
2377 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2378 42 Venky Shankar
2379 43 Venky Shankar
h3. 2022 Mar 21
2380
2381
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2382
2383
Run didn't go well: lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
2384
2385
2386 42 Venky Shankar
h3. 2022 Mar 08
2387
2388
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2389
2390
rerun with
2391
- (drop) https://github.com/ceph/ceph/pull/44679
2392
- (drop) https://github.com/ceph/ceph/pull/44958
2393
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2394
2395
* https://tracker.ceph.com/issues/54419 (new)
2396
    `ceph orch upgrade start` seems to never reach completion
2397
* https://tracker.ceph.com/issues/51964
2398
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2399
* https://tracker.ceph.com/issues/52624
2400
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2401
* https://tracker.ceph.com/issues/50223
2402
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2403
* https://tracker.ceph.com/issues/52438
2404
    qa: ffsb timeout
2405
* https://tracker.ceph.com/issues/50821
2406
    qa: untar_snap_rm failure during mds thrashing
2407 41 Venky Shankar
2408
2409
h3. 2022 Feb 09
2410
2411
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2412
2413
rerun with
2414
- (drop) https://github.com/ceph/ceph/pull/37938
2415
- (drop) https://github.com/ceph/ceph/pull/44335
2416
- (drop) https://github.com/ceph/ceph/pull/44491
2417
- (drop) https://github.com/ceph/ceph/pull/44501
2418
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2419
2420
* https://tracker.ceph.com/issues/51964
2421
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2422
* https://tracker.ceph.com/issues/54066
2423
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2424
* https://tracker.ceph.com/issues/48773
2425
    qa: scrub does not complete
2426
* https://tracker.ceph.com/issues/52624
2427
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2428
* https://tracker.ceph.com/issues/50223
2429
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2430
* https://tracker.ceph.com/issues/52438
2431 40 Patrick Donnelly
    qa: ffsb timeout
2432
2433
h3. 2022 Feb 01
2434
2435
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2436
2437
* https://tracker.ceph.com/issues/54107
2438
    kclient: hang during umount
2439
* https://tracker.ceph.com/issues/54106
2440
    kclient: hang during workunit cleanup
2441
* https://tracker.ceph.com/issues/54108
2442
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2443
* https://tracker.ceph.com/issues/48773
2444
    qa: scrub does not complete
2445
* https://tracker.ceph.com/issues/52438
2446
    qa: ffsb timeout
2447 36 Venky Shankar
2448
2449
h3. 2022 Jan 13
2450 39 Venky Shankar
2451 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2452 38 Venky Shankar
2453
rerun with:
2454 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2455
- (drop) https://github.com/ceph/ceph/pull/43184
2456
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2457
2458
* https://tracker.ceph.com/issues/50223
2459
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2460
* https://tracker.ceph.com/issues/51282
2461
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2462
* https://tracker.ceph.com/issues/48773
2463
    qa: scrub does not complete
2464
* https://tracker.ceph.com/issues/52624
2465
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2466
* https://tracker.ceph.com/issues/53859
2467 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2468
2469
h3. 2022 Jan 03
2470
2471
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2472
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2473
2474
* https://tracker.ceph.com/issues/50223
2475
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2476
* https://tracker.ceph.com/issues/51964
2477
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2478
* https://tracker.ceph.com/issues/51267
2479
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2480
* https://tracker.ceph.com/issues/51282
2481
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2482
* https://tracker.ceph.com/issues/50821
2483
    qa: untar_snap_rm failure during mds thrashing
2484 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2485
    mds: "FAILED ceph_assert(!segments.empty())"
2486
* https://tracker.ceph.com/issues/52279
2487 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2488 33 Patrick Donnelly
2489
2490
h3. 2021 Dec 22
2491
2492
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2493
2494
* https://tracker.ceph.com/issues/52624
2495
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2496
* https://tracker.ceph.com/issues/50223
2497
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2498
* https://tracker.ceph.com/issues/52279
2499
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2500
* https://tracker.ceph.com/issues/50224
2501
    qa: test_mirroring_init_failure_with_recovery failure
2502
* https://tracker.ceph.com/issues/48773
2503
    qa: scrub does not complete
2504 32 Venky Shankar
2505
2506
h3. 2021 Nov 30
2507
2508
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2509
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2510
2511
* https://tracker.ceph.com/issues/53436
2512
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2513
* https://tracker.ceph.com/issues/51964
2514
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2515
* https://tracker.ceph.com/issues/48812
2516
    qa: test_scrub_pause_and_resume_with_abort failure
2517
* https://tracker.ceph.com/issues/51076
2518
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2519
* https://tracker.ceph.com/issues/50223
2520
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2521
* https://tracker.ceph.com/issues/52624
2522
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2523
* https://tracker.ceph.com/issues/50250
2524
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2525 31 Patrick Donnelly
2526
2527
h3. 2021 November 9
2528
2529
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2530
2531
* https://tracker.ceph.com/issues/53214
2532
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2533
* https://tracker.ceph.com/issues/48773
2534
    qa: scrub does not complete
2535
* https://tracker.ceph.com/issues/50223
2536
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2537
* https://tracker.ceph.com/issues/51282
2538
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2539
* https://tracker.ceph.com/issues/52624
2540
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2541
* https://tracker.ceph.com/issues/53216
2542
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2543
* https://tracker.ceph.com/issues/50250
2544
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2545
2546 30 Patrick Donnelly
2547
2548
h3. 2021 November 03
2549
2550
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2551
2552
* https://tracker.ceph.com/issues/51964
2553
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2554
* https://tracker.ceph.com/issues/51282
2555
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2556
* https://tracker.ceph.com/issues/52436
2557
    fs/ceph: "corrupt mdsmap"
2558
* https://tracker.ceph.com/issues/53074
2559
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2560
* https://tracker.ceph.com/issues/53150
2561
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2562
* https://tracker.ceph.com/issues/53155
2563
    MDSMonitor: assertion during upgrade to v16.2.5+
2564 29 Patrick Donnelly
2565
2566
h3. 2021 October 26
2567
2568
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2569
2570
* https://tracker.ceph.com/issues/53074
2571
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2572
* https://tracker.ceph.com/issues/52997
2573
    testing: hanging umount
2574
* https://tracker.ceph.com/issues/50824
2575
    qa: snaptest-git-ceph bus error
2576
* https://tracker.ceph.com/issues/52436
2577
    fs/ceph: "corrupt mdsmap"
2578
* https://tracker.ceph.com/issues/48773
2579
    qa: scrub does not complete
2580
* https://tracker.ceph.com/issues/53082
2581
    ceph-fuse: segmentation fault in Client::handle_mds_map
2582
* https://tracker.ceph.com/issues/50223
2583
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2584
* https://tracker.ceph.com/issues/52624
2585
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2586
* https://tracker.ceph.com/issues/50224
2587
    qa: test_mirroring_init_failure_with_recovery failure
2588
* https://tracker.ceph.com/issues/50821
2589
    qa: untar_snap_rm failure during mds thrashing
2590
* https://tracker.ceph.com/issues/50250
2591
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2592
2593 27 Patrick Donnelly
2594
2595 28 Patrick Donnelly
h3. 2021 October 19
2596 27 Patrick Donnelly
2597
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2598
2599
* https://tracker.ceph.com/issues/52995
2600
    qa: test_standby_count_wanted failure
2601
* https://tracker.ceph.com/issues/52948
2602
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2603
* https://tracker.ceph.com/issues/52996
2604
    qa: test_perf_counters via test_openfiletable
2605
* https://tracker.ceph.com/issues/48772
2606
    qa: pjd: not ok 9, 44, 80
2607
* https://tracker.ceph.com/issues/52997
2608
    testing: hanging umount
2609
* https://tracker.ceph.com/issues/50250
2610
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2611
* https://tracker.ceph.com/issues/52624
2612
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2613
* https://tracker.ceph.com/issues/50223
2614
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2615
* https://tracker.ceph.com/issues/50821
2616
    qa: untar_snap_rm failure during mds thrashing
2617
* https://tracker.ceph.com/issues/48773
2618
    qa: scrub does not complete
2619 26 Patrick Donnelly
2620
2621
h3. 2021 October 12
2622
2623
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2624
2625
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2626
2627
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2628
2629
2630
* https://tracker.ceph.com/issues/51282
2631
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2632
* https://tracker.ceph.com/issues/52948
2633
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2634
* https://tracker.ceph.com/issues/48773
2635
    qa: scrub does not complete
2636
* https://tracker.ceph.com/issues/50224
2637
    qa: test_mirroring_init_failure_with_recovery failure
2638
* https://tracker.ceph.com/issues/52949
2639
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2640 25 Patrick Donnelly
2641 23 Patrick Donnelly
2642 24 Patrick Donnelly
h3. 2021 October 02
2643
2644
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2645
2646
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2647
2648
test_simple failures caused by a PR in this set.
2649
2650
A few reruns because of QA infra noise.
2651
2652
* https://tracker.ceph.com/issues/52822
2653
    qa: failed pacific install on fs:upgrade
2654
* https://tracker.ceph.com/issues/52624
2655
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2656
* https://tracker.ceph.com/issues/50223
2657
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2658
* https://tracker.ceph.com/issues/48773
2659
    qa: scrub does not complete
2660
2661
2662 23 Patrick Donnelly
h3. 2021 September 20
2663
2664
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2665
2666
* https://tracker.ceph.com/issues/52677
2667
    qa: test_simple failure
2668
* https://tracker.ceph.com/issues/51279
2669
    kclient hangs on umount (testing branch)
2670
* https://tracker.ceph.com/issues/50223
2671
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2672
* https://tracker.ceph.com/issues/50250
2673
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2674
* https://tracker.ceph.com/issues/52624
2675
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2676
* https://tracker.ceph.com/issues/52438
2677
    qa: ffsb timeout
2678 22 Patrick Donnelly
2679
2680
h3. 2021 September 10
2681
2682
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2683
2684
* https://tracker.ceph.com/issues/50223
2685
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2686
* https://tracker.ceph.com/issues/50250
2687
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2688
* https://tracker.ceph.com/issues/52624
2689
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2690
* https://tracker.ceph.com/issues/52625
2691
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2692
* https://tracker.ceph.com/issues/52439
2693
    qa: acls does not compile on centos stream
2694
* https://tracker.ceph.com/issues/50821
2695
    qa: untar_snap_rm failure during mds thrashing
2696
* https://tracker.ceph.com/issues/48773
2697
    qa: scrub does not complete
2698
* https://tracker.ceph.com/issues/52626
2699
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2700
* https://tracker.ceph.com/issues/51279
2701
    kclient hangs on umount (testing branch)
2702 21 Patrick Donnelly
2703
2704
h3. 2021 August 27
2705
2706
Several jobs died because of device failures.
2707
2708
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2709
2710
* https://tracker.ceph.com/issues/52430
2711
    mds: fast async create client mount breaks racy test
2712
* https://tracker.ceph.com/issues/52436
2713
    fs/ceph: "corrupt mdsmap"
2714
* https://tracker.ceph.com/issues/52437
2715
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2716
* https://tracker.ceph.com/issues/51282
2717
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2718
* https://tracker.ceph.com/issues/52438
2719
    qa: ffsb timeout
2720
* https://tracker.ceph.com/issues/52439
2721
    qa: acls does not compile on centos stream
2722 20 Patrick Donnelly
2723
2724
h3. 2021 July 30
2725
2726
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2727
2728
* https://tracker.ceph.com/issues/50250
2729
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2730
* https://tracker.ceph.com/issues/51282
2731
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2732
* https://tracker.ceph.com/issues/48773
2733
    qa: scrub does not complete
2734
* https://tracker.ceph.com/issues/51975
2735
    pybind/mgr/stats: KeyError
2736 19 Patrick Donnelly
2737
2738
h3. 2021 July 28
2739
2740
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2741
2742
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2743
2744
* https://tracker.ceph.com/issues/51905
2745
    qa: "error reading sessionmap 'mds1_sessionmap'"
2746
* https://tracker.ceph.com/issues/48773
2747
    qa: scrub does not complete
2748
* https://tracker.ceph.com/issues/50250
2749
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2750
* https://tracker.ceph.com/issues/51267
2751
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2752
* https://tracker.ceph.com/issues/51279
2753
    kclient hangs on umount (testing branch)
2754 18 Patrick Donnelly
2755
2756
h3. 2021 July 16
2757
2758
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2759
2760
* https://tracker.ceph.com/issues/48773
2761
    qa: scrub does not complete
2762
* https://tracker.ceph.com/issues/48772
2763
    qa: pjd: not ok 9, 44, 80
2764
* https://tracker.ceph.com/issues/45434
2765
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2766
* https://tracker.ceph.com/issues/51279
2767
    kclient hangs on umount (testing branch)
2768
* https://tracker.ceph.com/issues/50824
2769
    qa: snaptest-git-ceph bus error
2770 17 Patrick Donnelly
2771
2772
h3. 2021 July 04
2773
2774
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2775
2776
* https://tracker.ceph.com/issues/48773
2777
    qa: scrub does not complete
2778
* https://tracker.ceph.com/issues/39150
2779
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2780
* https://tracker.ceph.com/issues/45434
2781
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2782
* https://tracker.ceph.com/issues/51282
2783
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2784
* https://tracker.ceph.com/issues/48771
2785
    qa: iogen: workload fails to cause balancing
2786
* https://tracker.ceph.com/issues/51279
2787
    kclient hangs on umount (testing branch)
2788
* https://tracker.ceph.com/issues/50250
2789
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2790 16 Patrick Donnelly
2791
2792
h3. 2021 July 01
2793
2794
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2795
2796
* https://tracker.ceph.com/issues/51197
2797
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2798
* https://tracker.ceph.com/issues/50866
2799
    osd: stat mismatch on objects
2800
* https://tracker.ceph.com/issues/48773
2801
    qa: scrub does not complete
2802 15 Patrick Donnelly
2803
2804
h3. 2021 June 26
2805
2806
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2807
2808
* https://tracker.ceph.com/issues/51183
2809
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2810
* https://tracker.ceph.com/issues/51410
2811
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2812
* https://tracker.ceph.com/issues/48773
2813
    qa: scrub does not complete
2814
* https://tracker.ceph.com/issues/51282
2815
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2816
* https://tracker.ceph.com/issues/51169
2817
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2818
* https://tracker.ceph.com/issues/48772
2819
    qa: pjd: not ok 9, 44, 80
2820 14 Patrick Donnelly
2821
2822
h3. 2021 June 21
2823
2824
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2825
2826
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2827
2828
* https://tracker.ceph.com/issues/51282
2829
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2830
* https://tracker.ceph.com/issues/51183
2831
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2832
* https://tracker.ceph.com/issues/48773
2833
    qa: scrub does not complete
2834
* https://tracker.ceph.com/issues/48771
2835
    qa: iogen: workload fails to cause balancing
2836
* https://tracker.ceph.com/issues/51169
2837
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2838
* https://tracker.ceph.com/issues/50495
2839
    libcephfs: shutdown race fails with status 141
2840
* https://tracker.ceph.com/issues/45434
2841
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2842
* https://tracker.ceph.com/issues/50824
2843
    qa: snaptest-git-ceph bus error
2844
* https://tracker.ceph.com/issues/50223
2845
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2846 13 Patrick Donnelly
2847
2848
h3. 2021 June 16
2849
2850
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2851
2852
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2853
2854
* https://tracker.ceph.com/issues/45434
2855
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2856
* https://tracker.ceph.com/issues/51169
2857
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2858
* https://tracker.ceph.com/issues/43216
2859
    MDSMonitor: removes MDS coming out of quorum election
2860
* https://tracker.ceph.com/issues/51278
2861
    mds: "FAILED ceph_assert(!segments.empty())"
2862
* https://tracker.ceph.com/issues/51279
2863
    kclient hangs on umount (testing branch)
2864
* https://tracker.ceph.com/issues/51280
2865
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2866
* https://tracker.ceph.com/issues/51183
2867
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2868
* https://tracker.ceph.com/issues/51281
2869
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2870
* https://tracker.ceph.com/issues/48773
2871
    qa: scrub does not complete
2872
* https://tracker.ceph.com/issues/51076
2873
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2874
* https://tracker.ceph.com/issues/51228
2875
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2876
* https://tracker.ceph.com/issues/51282
2877
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2878 12 Patrick Donnelly
2879
2880
h3. 2021 June 14
2881
2882
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2883
2884
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2885
2886
* https://tracker.ceph.com/issues/51169
2887
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2888
* https://tracker.ceph.com/issues/51228
2889
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2890
* https://tracker.ceph.com/issues/48773
2891
    qa: scrub does not complete
2892
* https://tracker.ceph.com/issues/51183
2893
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2894
* https://tracker.ceph.com/issues/45434
2895
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2896
* https://tracker.ceph.com/issues/51182
2897
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2898
* https://tracker.ceph.com/issues/51229
2899
    qa: test_multi_snap_schedule list difference failure
2900
* https://tracker.ceph.com/issues/50821
2901
    qa: untar_snap_rm failure during mds thrashing
2902 11 Patrick Donnelly
2903
2904
h3. 2021 June 13
2905
2906
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2907
2908
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2909
2910
* https://tracker.ceph.com/issues/51169
2911
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2912
* https://tracker.ceph.com/issues/48773
2913
    qa: scrub does not complete
2914
* https://tracker.ceph.com/issues/51182
2915
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2916
* https://tracker.ceph.com/issues/51183
2917
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2918
* https://tracker.ceph.com/issues/51197
2919
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2920
* https://tracker.ceph.com/issues/45434
2921 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2922
2923
h3. 2021 June 11
2924
2925
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2926
2927
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2928
2929
* https://tracker.ceph.com/issues/51169
2930
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2931
* https://tracker.ceph.com/issues/45434
2932
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2933
* https://tracker.ceph.com/issues/48771
2934
    qa: iogen: workload fails to cause balancing
2935
* https://tracker.ceph.com/issues/43216
2936
    MDSMonitor: removes MDS coming out of quorum election
2937
* https://tracker.ceph.com/issues/51182
2938
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2939
* https://tracker.ceph.com/issues/50223
2940
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2941
* https://tracker.ceph.com/issues/48773
2942
    qa: scrub does not complete
2943
* https://tracker.ceph.com/issues/51183
2944
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2945
* https://tracker.ceph.com/issues/51184
2946
    qa: fs:bugs does not specify distro
2947 9 Patrick Donnelly
2948
2949
h3. 2021 June 03
2950
2951
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2952
2953
* https://tracker.ceph.com/issues/45434
2954
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2955
* https://tracker.ceph.com/issues/50016
2956
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2957
* https://tracker.ceph.com/issues/50821
2958
    qa: untar_snap_rm failure during mds thrashing
2959
* https://tracker.ceph.com/issues/50622 (regression)
2960
    msg: active_connections regression
2961
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2962
    qa: failed umount in test_volumes
2963
* https://tracker.ceph.com/issues/48773
2964
    qa: scrub does not complete
2965
* https://tracker.ceph.com/issues/43216
2966
    MDSMonitor: removes MDS coming out of quorum election
2967 7 Patrick Donnelly
2968
2969 8 Patrick Donnelly
h3. 2021 May 18
2970
2971
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2972
2973
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2974
looked better. Some odd new noise in the rerun relating to packaging and "No
2975
module named 'tasks.ceph'".
2976
2977
* https://tracker.ceph.com/issues/50824
2978
    qa: snaptest-git-ceph bus error
2979
* https://tracker.ceph.com/issues/50622 (regression)
2980
    msg: active_connections regression
2981
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2982
    qa: failed umount in test_volumes
2983
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2984
    qa: quota failure
2985
2986
2987 7 Patrick Donnelly
h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"


h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers


h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by a PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize


h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"


h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30


h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout


h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"


h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure


h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed


h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"


h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2


h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

and a failure caused by PR: https://github.com/ceph/ceph/pull/39969


h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing