Main » History » Version 267

Patrick Donnelly, 04/30/2024 02:00 PM

1 221 Patrick Donnelly
h1. <code>main</code> branch
2 1 Patrick Donnelly
3 264 Patrick Donnelly
h3. 2024-04-30
4 1 Patrick Donnelly
5 265 Patrick Donnelly
"wip-pdonnell-testing-20240429.210911-debug":https://tracker.ceph.com/issues/65694
6 1 Patrick Donnelly
7 266 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
8
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
9
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314
10
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
11
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
12
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
13 267 Patrick Donnelly
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
14
15 263 Rishabh Dave
16
h3. 26 APR 2024
17
18
* https://pulpito.ceph.com/rishabh-2024-04-24_05:22:11-fs-wip-rishabh-testing-20240416.193735-5-testing-default-smithi/
19
20
* https://tracker.ceph.com/issues/63700
21
  qa: test_cd_with_args failure
22
* https://tracker.ceph.com/issues/64927
23
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
24
* https://tracker.ceph.com/issues/65022
25
  qa: test_max_items_per_obj open procs not fully cleaned up
26
* https://tracker.ceph.com/issues/53859
27
  qa: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
28
* https://tracker.ceph.com/issues/65136
29
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
30
31
* https://tracker.ceph.com/issues/64572
32
  workunits/fsx.sh failure
33
* https://tracker.ceph.com/issues/62067
34
  ffsb.sh failure "Resource temporarily unavailable"
35
* https://tracker.ceph.com/issues/65265
36
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
37
* https://tracker.ceph.com/issues/57656
38
  dbench: write failed on handle 10009 (Resource temporarily unavailable)
39
* https://tracker.ceph.com/issues/64502
40
  pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
41
* https://tracker.ceph.com/issues/65020
42
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
43
* https://tracker.ceph.com/issues/48562
44
  qa: scrub - object missing on disk; some files may be lost
45
* https://tracker.ceph.com/issues/55805
46
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
47
48
49 257 Patrick Donnelly
h3. 2024-04-20
50
51
https://tracker.ceph.com/issues/65596
52
53 258 Patrick Donnelly
* "qa: logrotate fails when state file is already locked":https://tracker.ceph.com/issues/65612
54
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314
55
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
56
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
57
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
58
* "qa/cephfs: test_cephfs_mirror_blocklist raises KeyError: 'rados_inst'":https://tracker.ceph.com/issues/64927
59
* "qa: health warning no active mgr (MGR_DOWN) occurs before and after test_nfs runs":https://tracker.ceph.com/issues/65265
60 259 Patrick Donnelly
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
61
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
62
* "test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed":https://tracker.ceph.com/issues/61243
63
* "qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)":https://tracker.ceph.com/issues/65246
64
* "client: resends request to same MDS it just received a forward from if it does not have an open session with the target":https://tracker.ceph.com/issues/65614
65 260 Patrick Donnelly
* "pybind/mgr/snap_schedule: 1m scheduled snaps not reliably executed":https://tracker.ceph.com/issues/65616
66
* "qa: fsstress: cannot execute binary file: Exec format error":https://tracker.ceph.com/issues/65618
67 261 Patrick Donnelly
* "qa: untar_snap_rm failure during mds thrashing":https://tracker.ceph.com/issues/50821
68
* "[testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)":https://tracker.ceph.com/issues/57656
69
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
70
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
71 258 Patrick Donnelly
72 256 Venky Shankar
h3. 2024-04-12
73
74
https://tracker.ceph.com/issues/65324
75
76
(Many `sudo systemctl stop ceph-ba42f8d0-efae-11ee-b647-cb9ed24678a4@mon.a` failures and infrastructure issues in this run)
77
78
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
79
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
80
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
81
* "qa: scrub - object missing on disk; some files may be lost":https://tracker.ceph.com/issues/48562
82
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
83 247 Rishabh Dave
84 253 Venky Shankar
h3. 2024-04-04
85
86
https://tracker.ceph.com/issues/65300
87
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240330.172700
88
89
(Many `sudo systemctl stop ceph-ba42f8d0-efae-11ee-b647-cb9ed24678a4@mon.a` failures in this run)
90
91
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
92
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
93
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
94
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
95 254 Venky Shankar
* "qa: scrub - object missing on disk; some files may be lost":https://tracker.ceph.com/issues/48562
96
* "upgrade stalls after upgrading one ceph-mgr daemon":https://tracker.ceph.com/issues/65263
97 253 Venky Shankar
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
98
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
99 254 Venky Shankar
* "qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)":https://tracker.ceph.com/issues/65246
100
* "qa: test_cd_with_args failure":https://tracker.ceph.com/issues/63700
101
* "valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)":https://tracker.ceph.com/issues/65314
102 253 Venky Shankar
103 249 Rishabh Dave
h3. 4 Apr 2024
104 246 Rishabh Dave
105
https://pulpito.ceph.com/rishabh-2024-03-27_05:27:11-fs-wip-rishabh-testing-20240326.131558-testing-default-smithi/
106
107
* https://tracker.ceph.com/issues/64927
108
  qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_inst'"
109
* https://tracker.ceph.com/issues/65022
110
  qa: test_max_items_per_obj open procs not fully cleaned up
111
* https://tracker.ceph.com/issues/63699
112
  qa: failed cephfs-shell test_reading_conf
113
* https://tracker.ceph.com/issues/63700
114
  qa: test_cd_with_args failure
115
* https://tracker.ceph.com/issues/65136
116
  QA failure: test_fscrypt_dummy_encryption_with_quick_group
117
* https://tracker.ceph.com/issues/65246
118
  qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
119
120 248 Rishabh Dave
121 246 Rishabh Dave
* https://tracker.ceph.com/issues/58945
122 1 Patrick Donnelly
  qa: xfstests-dev's generic test suite has failures with fuse client
123
* https://tracker.ceph.com/issues/57656
124 251 Rishabh Dave
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
125 1 Patrick Donnelly
* https://tracker.ceph.com/issues/63265
126
  qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
127 246 Rishabh Dave
* https://tracker.ceph.com/issues/62067
128 251 Rishabh Dave
  ffsb.sh failure "Resource temporarily unavailable"
129 246 Rishabh Dave
* https://tracker.ceph.com/issues/63949
130
  leak in mds.c detected by valgrind during CephFS QA run
131
* https://tracker.ceph.com/issues/48562
132
  qa: scrub - object missing on disk; some files may be lost
133
* https://tracker.ceph.com/issues/65020
134
  qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
135
* https://tracker.ceph.com/issues/64572
136
  workunits/fsx.sh failure
137
* https://tracker.ceph.com/issues/57676
138
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
139 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64502
140 246 Rishabh Dave
  client: ceph-fuse fails to unmount after upgrade to main
141 1 Patrick Donnelly
* https://tracker.ceph.com/issues/54741
142
  crash: MDSTableClient::got_journaled_ack(unsigned long)
143 250 Rishabh Dave
144 248 Rishabh Dave
* https://tracker.ceph.com/issues/65265
145
  qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
146 1 Patrick Donnelly
* https://tracker.ceph.com/issues/65308
147
  qa: fs was offline but also unexpectedly degraded
148
* https://tracker.ceph.com/issues/65309
149
  qa: dbench.sh failed with "ERROR: handle 10318 was not found"
150 250 Rishabh Dave
151
* https://tracker.ceph.com/issues/65018
152 251 Rishabh Dave
  PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
153 250 Rishabh Dave
* https://tracker.ceph.com/issues/52624
154
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
155 245 Rishabh Dave
156 240 Patrick Donnelly
h3. 2024-04-02
157
158
https://tracker.ceph.com/issues/65215
159
160
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
161
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
162
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
163
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
164
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
165
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
166
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
167
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
168
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
169
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
170 255 Patrick Donnelly
* "qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) in cluster log":https://tracker.ceph.com/issues/65021
171 241 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
172
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
173
* "ffsb.sh failure Resource temporarily unavailable":https://tracker.ceph.com/issues/62067
174
* "QA failure: test_fscrypt_dummy_encryption_with_quick_group":https://tracker.ceph.com/issues/65136
175 244 Patrick Donnelly
* "qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled in cluster log":https://tracker.ceph.com/issues/65271
176 241 Patrick Donnelly
* "qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite":https://tracker.ceph.com/issues/64534
177 240 Patrick Donnelly
178 236 Patrick Donnelly
h3. 2024-03-28
179
180
https://tracker.ceph.com/issues/65213
181
182 237 Patrick Donnelly
* "qa: error during scrub thrashing: rank damage found: {'backtrace'}":https://tracker.ceph.com/issues/57676
183
* "workunits/fsx.sh failure":https://tracker.ceph.com/issues/64572
184
* "PG_DEGRADED warnings during cluster creation via cephadm: Health check failed: Degraded data":https://tracker.ceph.com/issues/65018
185 238 Patrick Donnelly
* "suites/fsstress.sh hangs on one client - test times out":https://tracker.ceph.com/issues/64707
186
* "qa: ceph tell 4.3a deep-scrub command not found":https://tracker.ceph.com/issues/64972
187
* "qa: iogen workunit: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}":https://tracker.ceph.com/issues/54108
188 239 Patrick Donnelly
* "qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details in cluster log":https://tracker.ceph.com/issues/65020
189
* "qa: failed cephfs-shell test_reading_conf":https://tracker.ceph.com/issues/63699
190
* "Test failure: test_cephfs_mirror_cancel_mirroring_and_readd":https://tracker.ceph.com/issues/64711
191
* "qa: test_max_items_per_obj open procs not fully cleaned up":https://tracker.ceph.com/issues/65022
192
* "pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main":https://tracker.ceph.com/issues/64502
193
* "centos 9 testing reveals rocksdb Leak_StillReachable memory leak in mons":https://tracker.ceph.com/issues/61774
194
* "qa: Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)":https://tracker.ceph.com/issues/52624
195
* "qa: dbench workload timeout":https://tracker.ceph.com/issues/50220
196
197
198 236 Patrick Donnelly
199 235 Milind Changire
h3. 2024-03-25
200
201
https://pulpito.ceph.com/mchangir-2024-03-22_09:46:06-fs:upgrade-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
202
* https://tracker.ceph.com/issues/64502
203
  fusermount -u fails with: teuthology.exceptions.MaxWhileTries: reached maximum tries (51) after waiting for 300 seconds
204
205
https://pulpito.ceph.com/mchangir-2024-03-22_09:48:09-fs:libcephfs-wip-mchangir-testing-main-20240318.032620-testing-default-smithi/
206
207
* https://tracker.ceph.com/issues/62245
208
  libcephfs/test.sh failed - https://tracker.ceph.com/issues/62245#note-3
209
210
211 228 Patrick Donnelly
h3. 2024-03-20
212
213 234 Patrick Donnelly
https://pulpito.ceph.com/?branch=wip-batrick-testing-20240320.145742
214 228 Patrick Donnelly
215 233 Patrick Donnelly
https://github.com/batrick/ceph/commit/360516069d9393362c4cc6eb9371680fe16d66ab
216
217 229 Patrick Donnelly
Ubuntu jobs filtered out because builds were skipped by jenkins/shaman.
218 1 Patrick Donnelly
219 229 Patrick Donnelly
This run has a lot more failures because https://github.com/ceph/ceph/pull/55455 fixed log WRN/ERR checks.
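As a rough illustration (a hypothetical sketch, not the actual qa/teuthology code; the log path and ignorelist entries below are invented), the restored check amounts to scanning the cluster log for [WRN]/[ERR] lines and failing the job when a line matches no ignorelist pattern, which is why previously hidden warnings now surface as failures:

<pre><code class="python">
import re
from pathlib import Path

# Hypothetical cluster log location and ignorelist (for illustration only).
LOG = Path("/tmp/cluster.mon.a.log")
IGNORELIST = [
    r"POOL_APP_NOT_ENABLED",
    r"CEPHADM_STRAY_DAEMON",
]

def offending_lines(log_text):
    """Return cluster [WRN]/[ERR] lines that no ignorelist pattern matches."""
    hits = []
    for line in log_text.splitlines():
        if not re.search(r"cluster \[(WRN|ERR)\]", line):
            continue
        if any(re.search(pat, line) for pat in IGNORELIST):
            continue
        hits.append(line)
    return hits

if __name__ == "__main__":
    bad = offending_lines(LOG.read_text())
    for line in bad:
        print(line)
    # A non-empty list is what turns an otherwise green job into a failure.
    raise SystemExit(1 if bad else 0)
</code></pre>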
220 228 Patrick Donnelly
221 229 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
222
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
223
* https://tracker.ceph.com/issues/64572
224
    workunits/fsx.sh failure
225
* https://tracker.ceph.com/issues/65018
226
    PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
227
* https://tracker.ceph.com/issues/64707 (new issue)
228
    suites/fsstress.sh hangs on one client - test times out
229 1 Patrick Donnelly
* https://tracker.ceph.com/issues/64988
230
    qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
231
* https://tracker.ceph.com/issues/59684
232
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
233 230 Patrick Donnelly
* https://tracker.ceph.com/issues/64972
234
    qa: "ceph tell 4.3a deep-scrub" command not found
235
* https://tracker.ceph.com/issues/54108
236
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
237
* https://tracker.ceph.com/issues/65019
238
    qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log 
239
* https://tracker.ceph.com/issues/65020
240
    qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
241
* https://tracker.ceph.com/issues/65021
242
    qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
243
* https://tracker.ceph.com/issues/63699
244
    qa: failed cephfs-shell test_reading_conf
245 231 Patrick Donnelly
* https://tracker.ceph.com/issues/64711
246
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
247
* https://tracker.ceph.com/issues/50821
248
    qa: untar_snap_rm failure during mds thrashing
249 232 Patrick Donnelly
* https://tracker.ceph.com/issues/65022
250
    qa: test_max_items_per_obj open procs not fully cleaned up
251 228 Patrick Donnelly
252 226 Venky Shankar
h3.  14th March 2024
253
254
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240307.013758
255
256 227 Venky Shankar
(pjd.sh failures are related to a bug in the testing kernel. See https://tracker.ceph.com/issues/64679#note-4)
257 226 Venky Shankar
258
* https://tracker.ceph.com/issues/62067
259
    ffsb.sh failure "Resource temporarily unavailable"
260
* https://tracker.ceph.com/issues/57676
261
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
262
* https://tracker.ceph.com/issues/64502
263
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
264
* https://tracker.ceph.com/issues/64572
265
    workunits/fsx.sh failure
266
* https://tracker.ceph.com/issues/63700
267
    qa: test_cd_with_args failure
268
* https://tracker.ceph.com/issues/59684
269
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
270
* https://tracker.ceph.com/issues/61243
271
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
272
273 225 Venky Shankar
h3. 5th March 2024
274
275
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240304.042522
276
277
* https://tracker.ceph.com/issues/57676
278
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
279
* https://tracker.ceph.com/issues/64502
280
    pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
281
* https://tracker.ceph.com/issues/63949
282
    leak in mds.c detected by valgrind during CephFS QA run
283
* https://tracker.ceph.com/issues/57656
284
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
285
* https://tracker.ceph.com/issues/63699
286
    qa: failed cephfs-shell test_reading_conf
287
* https://tracker.ceph.com/issues/64572
288
    workunits/fsx.sh failure
289
* https://tracker.ceph.com/issues/64707 (new issue)
290
    suites/fsstress.sh hangs on one client - test times out
291
* https://tracker.ceph.com/issues/59684
292
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
293
* https://tracker.ceph.com/issues/63700
294
    qa: test_cd_with_args failure
295
* https://tracker.ceph.com/issues/64711
296
    Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
297
* https://tracker.ceph.com/issues/64729 (new issue)
298
    mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
299
* https://tracker.ceph.com/issues/64730
300
    fs/misc/multiple_rsync.sh workunit times out
301
302 224 Venky Shankar
h3. 26th Feb 2024
303
304
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240216.060239
305
306
(This run is a bit messy due to
307
308
  a) OCI runtime issues in the testing kernel with centos9
309
  b) SELinux denial-related failures
310
  c) Unrelated MON_DOWN warnings)
311
312
* https://tracker.ceph.com/issues/57676
313
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
314
* https://tracker.ceph.com/issues/63700
315
    qa: test_cd_with_args failure
316
* https://tracker.ceph.com/issues/63949
317
    leak in mds.c detected by valgrind during CephFS QA run
318
* https://tracker.ceph.com/issues/59684
319
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
320
* https://tracker.ceph.com/issues/61243
321
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
322
* https://tracker.ceph.com/issues/63699
323
    qa: failed cephfs-shell test_reading_conf
324
* https://tracker.ceph.com/issues/64172
325
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
326
* https://tracker.ceph.com/issues/57656
327
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
328
* https://tracker.ceph.com/issues/64572
329
    workunits/fsx.sh failure
330
331 222 Patrick Donnelly
h3. 20th Feb 2024
332
333
https://github.com/ceph/ceph/pull/55601
334
https://github.com/ceph/ceph/pull/55659
335
336
https://pulpito.ceph.com/pdonnell-2024-02-20_07:23:03-fs:upgrade:mds_upgrade_sequence-wip-batrick-testing-20240220.022152-distro-default-smithi/
337
338
* https://tracker.ceph.com/issues/64502
339
    client: quincy ceph-fuse fails to unmount after upgrade to main
340
341 223 Patrick Donnelly
This run has numerous problems. #55601 introduces testing for the upgrade sequence from <code>reef/{v18.2.0,v18.2.1,reef}</code> as well as an extra dimension for the ceph-fuse client. The main "big" issue is i64502: the ceph-fuse client is not being unmounted when <code>fusermount -u</code> is called. Instead, the client begins to unmount only after daemons are shut down during test cleanup.
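As a minimal sketch of the symptom (the mountpoint path is hypothetical and this is not the qa task code), the check below issues <code>fusermount -u</code> and then polls /proc/mounts; in the failing jobs the mountpoint only disappears once the cluster daemons are stopped, so a loop like this exhausts its retries:

<pre><code class="python">
import subprocess
import time

MOUNTPOINT = "/mnt/cephfs"   # hypothetical ceph-fuse mountpoint
TIMEOUT = 300                # seconds, mirroring the MaxWhileTries wait

def is_mounted(path):
    """Return True if `path` still appears as a mount target in /proc/mounts."""
    with open("/proc/mounts") as f:
        return any(line.split()[1] == path for line in f)

def unmount_and_wait(path, timeout=TIMEOUT):
    """Ask FUSE to unmount, then poll until the mount is gone or we time out."""
    subprocess.run(["fusermount", "-u", path], check=True)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not is_mounted(path):
            return True
        time.sleep(5)
    return False  # corresponds to the teuthology MaxWhileTries failure

if __name__ == "__main__":
    ok = unmount_and_wait(MOUNTPOINT)
    print("unmounted" if ok else "still mounted after %d seconds" % TIMEOUT)
</code></pre>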
342 218 Venky Shankar
343
h3. 19th Feb 2024
344
345 220 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240217.015652
346
347 218 Venky Shankar
* https://tracker.ceph.com/issues/61243
348
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
349
* https://tracker.ceph.com/issues/63700
350
    qa: test_cd_with_args failure
351
* https://tracker.ceph.com/issues/63141
352
    qa/cephfs: test_idem_unaffected_root_squash fails
353
* https://tracker.ceph.com/issues/59684
354
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
355
* https://tracker.ceph.com/issues/63949
356
    leak in mds.c detected by valgrind during CephFS QA run
357
* https://tracker.ceph.com/issues/63764
358
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
359
* https://tracker.ceph.com/issues/63699
360
    qa: failed cephfs-shell test_reading_conf
361 219 Venky Shankar
* https://tracker.ceph.com/issues/64482
362
    ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
363 201 Rishabh Dave
364 217 Venky Shankar
h3. 29 Jan 2024
365
366
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240119.075157-1
367
368
* https://tracker.ceph.com/issues/57676
369
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
370
* https://tracker.ceph.com/issues/63949
371
    leak in mds.c detected by valgrind during CephFS QA run
372
* https://tracker.ceph.com/issues/62067
373
    ffsb.sh failure "Resource temporarily unavailable"
374
* https://tracker.ceph.com/issues/64172
375
    Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
376
* https://tracker.ceph.com/issues/63265
377
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
378
* https://tracker.ceph.com/issues/61243
379
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
380
* https://tracker.ceph.com/issues/59684
381
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
382
* https://tracker.ceph.com/issues/57656
383
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
384
* https://tracker.ceph.com/issues/64209
385
    snaptest-multiple-capsnaps.sh fails with "got remote process result: 1"
386
387 216 Venky Shankar
h3. 17th Jan 2024
388
389
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20240103.072409-1
390
391
* https://tracker.ceph.com/issues/63764
392
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
393
* https://tracker.ceph.com/issues/57676
394
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
395
* https://tracker.ceph.com/issues/51964
396
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
397
* https://tracker.ceph.com/issues/63949
398
    leak in mds.c detected by valgrind during CephFS QA run
399
* https://tracker.ceph.com/issues/62067
400
    ffsb.sh failure "Resource temporarily unavailable"
401
* https://tracker.ceph.com/issues/61243
402
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
403
* https://tracker.ceph.com/issues/63259
404
    mds: failed to store backtrace and force file system read-only
405
* https://tracker.ceph.com/issues/63265
406
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
407
408
h3. 16 Jan 2024
409 215 Rishabh Dave
410 214 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-12-11_15:37:57-fs-rishabh-2023dec11-testing-default-smithi/
411
https://pulpito.ceph.com/rishabh-2023-12-17_11:19:43-fs-rishabh-2023dec11-testing-default-smithi/
412
https://pulpito.ceph.com/rishabh-2024-01-04_18:43:16-fs-rishabh-2024jan4-testing-default-smithi
413
414
* https://tracker.ceph.com/issues/63764
415
  Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
416
* https://tracker.ceph.com/issues/63141
417
  qa/cephfs: test_idem_unaffected_root_squash fails
418
* https://tracker.ceph.com/issues/62067
419
  ffsb.sh failure "Resource temporarily unavailable" 
420
* https://tracker.ceph.com/issues/51964
421
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
422
* https://tracker.ceph.com/issues/54462 
423
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
424
* https://tracker.ceph.com/issues/57676
425
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
426
427
* https://tracker.ceph.com/issues/63949
428
  valgrind leak in MDS
429
* https://tracker.ceph.com/issues/64041
430
  qa/cephfs: fs/upgrade/nofs suite attempts to jump more than 2 releases
431
* The fsstress failure in the last run was due to a kernel MM layer failure, unrelated to CephFS
432
* From the last run, job #7507400 failed due to the MGR; the FS wasn't degraded, so it's unrelated to CephFS
433
434 213 Venky Shankar
h3. 06 Dec 2023
435
436
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818
437
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231206.125818-x (rerun w/ squid kickoff changes)
438
439
* https://tracker.ceph.com/issues/63764
440
    Test failure: test_r_with_fsname_and_no_path_in_cap (tasks.cephfs.test_multifs_auth.TestMDSCaps)
441
* https://tracker.ceph.com/issues/63233
442
    mon|client|mds: valgrind reports possible leaks in the MDS
443
* https://tracker.ceph.com/issues/57676
444
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
445
* https://tracker.ceph.com/issues/62580
446
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
447
* https://tracker.ceph.com/issues/62067
448
    ffsb.sh failure "Resource temporarily unavailable"
449
* https://tracker.ceph.com/issues/61243
450
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
451
* https://tracker.ceph.com/issues/62081
452
    tasks/fscrypt-common does not finish, timesout
453
* https://tracker.ceph.com/issues/63265
454
    qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
455
* https://tracker.ceph.com/issues/63806
456
    ffsb.sh workunit failure (MDS: std::out_of_range, damaged)
457
458 211 Patrick Donnelly
h3. 30 Nov 2023
459
460
https://pulpito.ceph.com/pdonnell-2023-11-30_08:05:19-fs:shell-wip-batrick-testing-20231130.014408-distro-default-smithi/
461
462
* https://tracker.ceph.com/issues/63699
463 212 Patrick Donnelly
    qa: failed cephfs-shell test_reading_conf
464
* https://tracker.ceph.com/issues/63700
465
    qa: test_cd_with_args failure
466 211 Patrick Donnelly
467 210 Venky Shankar
h3. 29 Nov 2023
468
469
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231107.042705
470
471
* https://tracker.ceph.com/issues/63233
472
    mon|client|mds: valgrind reports possible leaks in the MDS
473
* https://tracker.ceph.com/issues/63141
474
    qa/cephfs: test_idem_unaffected_root_squash fails
475
* https://tracker.ceph.com/issues/57676
476
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
477
* https://tracker.ceph.com/issues/57655
478
    qa: fs:mixed-clients kernel_untar_build failure
479
* https://tracker.ceph.com/issues/62067
480
    ffsb.sh failure "Resource temporarily unavailable"
481
* https://tracker.ceph.com/issues/61243
482
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
483
* https://tracker.ceph.com/issues/62510 (pending RHEL backport)
    snaptest-git-ceph.sh failure with fs/thrash
484
* https://tracker.ceph.com/issues/62810
485
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
486
487 206 Venky Shankar
h3. 14 Nov 2023
488 207 Milind Changire
(Milind)
489
490
https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-smithi/
491
492
* https://tracker.ceph.com/issues/53859
493
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
494
* https://tracker.ceph.com/issues/63233
495
  mon|client|mds: valgrind reports possible leaks in the MDS
496
* https://tracker.ceph.com/issues/63521
497
  qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrubChecks)
498
* https://tracker.ceph.com/issues/57655
499
  qa: fs:mixed-clients kernel_untar_build failure
500
* https://tracker.ceph.com/issues/62580
501
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
502
* https://tracker.ceph.com/issues/57676
503
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
504
* https://tracker.ceph.com/issues/61243
505
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
506
* https://tracker.ceph.com/issues/63141
507
    qa/cephfs: test_idem_unaffected_root_squash fails
508
* https://tracker.ceph.com/issues/51964
509
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
510
* https://tracker.ceph.com/issues/63522
511
    No module named 'tasks.ceph_fuse'
512
    No module named 'tasks.kclient'
513
    No module named 'tasks.cephfs.fuse_mount'
514
    No module named 'tasks.ceph'
515
* https://tracker.ceph.com/issues/63523
516
    Command failed - qa/workunits/fs/misc/general_vxattrs.sh
517
518
519
h3. 14 Nov 2023
520 206 Venky Shankar
521
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231106.073650
522
523
(Never mind the fs:upgrade test failure - the PR is excluded from merge)
524
525
* https://tracker.ceph.com/issues/57676
526
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
527
* https://tracker.ceph.com/issues/63233
528
    mon|client|mds: valgrind reports possible leaks in the MDS
529
* https://tracker.ceph.com/issues/63141
530
    qa/cephfs: test_idem_unaffected_root_squash fails
531
* https://tracker.ceph.com/issues/62580
532
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
533
* https://tracker.ceph.com/issues/57655
534
    qa: fs:mixed-clients kernel_untar_build failure
535
* https://tracker.ceph.com/issues/51964
536
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
537
* https://tracker.ceph.com/issues/63519
538
    ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
539
* https://tracker.ceph.com/issues/57087
540
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
541
* https://tracker.ceph.com/issues/58945
542
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
543
544 204 Rishabh Dave
h3. 7 Nov 2023
545
546 205 Rishabh Dave
fs: https://pulpito.ceph.com/rishabh-2023-11-04_04:30:51-fs-rishabh-2023nov3-testing-default-smithi/
547
re-run: https://pulpito.ceph.com/rishabh-2023-11-05_14:10:09-fs-rishabh-2023nov3-testing-default-smithi/
548
smoke: https://pulpito.ceph.com/rishabh-2023-11-08_08:39:05-smoke-rishabh-2023nov3-testing-default-smithi/
549 204 Rishabh Dave
550
* https://tracker.ceph.com/issues/53859
551
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
552
* https://tracker.ceph.com/issues/63233
553
  mon|client|mds: valgrind reports possible leaks in the MDS
554
* https://tracker.ceph.com/issues/57655
555
  qa: fs:mixed-clients kernel_untar_build failure
556
* https://tracker.ceph.com/issues/57676
557
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
558
559
* https://tracker.ceph.com/issues/63473
560
  fsstress.sh failed with errno 124
561
562 202 Rishabh Dave
h3. 3 Nov 2023
563 203 Rishabh Dave
564 202 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-10-27_06:26:52-fs-rishabh-2023oct26-testing-default-smithi/
565
566
* https://tracker.ceph.com/issues/63141
567
  qa/cephfs: test_idem_unaffected_root_squash fails
568
* https://tracker.ceph.com/issues/63233
569
  mon|client|mds: valgrind reports possible leaks in the MDS
570
* https://tracker.ceph.com/issues/57656
571
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
572
* https://tracker.ceph.com/issues/57655
573
  qa: fs:mixed-clients kernel_untar_build failure
574
* https://tracker.ceph.com/issues/57676
575
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
576
577
* https://tracker.ceph.com/issues/59531
578
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
579
* https://tracker.ceph.com/issues/52624
580
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
581
582 198 Patrick Donnelly
h3. 24 October 2023
583
584
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231024.144545
585
586 200 Patrick Donnelly
Two failures:
587
588
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438459/
589
https://pulpito.ceph.com/pdonnell-2023-10-26_05:21:22-fs-wip-batrick-testing-20231024.144545-distro-default-smithi/7438468/
590
591
Probably related to https://github.com/ceph/ceph/pull/53255: killing the mount as part of the test did not complete. Will research more.
592
593 198 Patrick Donnelly
* https://tracker.ceph.com/issues/52624
594
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
595
* https://tracker.ceph.com/issues/57676
596 199 Patrick Donnelly
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
597
* https://tracker.ceph.com/issues/63233
598
    mon|client|mds: valgrind reports possible leaks in the MDS
599
* https://tracker.ceph.com/issues/59531
600
    "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS 
601
* https://tracker.ceph.com/issues/57655
602
    qa: fs:mixed-clients kernel_untar_build failure
603 200 Patrick Donnelly
* https://tracker.ceph.com/issues/62067
604
    ffsb.sh failure "Resource temporarily unavailable"
605
* https://tracker.ceph.com/issues/63411
606
    qa: flush journal may cause timeouts of `scrub status`
607
* https://tracker.ceph.com/issues/61243
608
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
609
* https://tracker.ceph.com/issues/63141
610 198 Patrick Donnelly
    test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
611 148 Rishabh Dave
612 195 Venky Shankar
h3. 18 Oct 2023
613
614
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
615
616
* https://tracker.ceph.com/issues/52624
617
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
618
* https://tracker.ceph.com/issues/57676
619
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
620
* https://tracker.ceph.com/issues/63233
621
    mon|client|mds: valgrind reports possible leaks in the MDS
622
* https://tracker.ceph.com/issues/63141
623
    qa/cephfs: test_idem_unaffected_root_squash fails
624
* https://tracker.ceph.com/issues/59531
625
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
626
* https://tracker.ceph.com/issues/62658
627
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
628
* https://tracker.ceph.com/issues/62580
629
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
630
* https://tracker.ceph.com/issues/62067
631
    ffsb.sh failure "Resource temporarily unavailable"
632
* https://tracker.ceph.com/issues/57655
633
    qa: fs:mixed-clients kernel_untar_build failure
634
* https://tracker.ceph.com/issues/62036
635
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
636
* https://tracker.ceph.com/issues/58945
637
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
638
* https://tracker.ceph.com/issues/62847
639
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
640
641 193 Venky Shankar
h3. 13 Oct 2023
642
643
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
644
645
* https://tracker.ceph.com/issues/52624
646
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
647
* https://tracker.ceph.com/issues/62936
648
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
649
* https://tracker.ceph.com/issues/47292
650
    cephfs-shell: test_df_for_valid_file failure
651
* https://tracker.ceph.com/issues/63141
652
    qa/cephfs: test_idem_unaffected_root_squash fails
653
* https://tracker.ceph.com/issues/62081
654
    tasks/fscrypt-common does not finish, timesout
655 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
656
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
657 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
658
    mon|client|mds: valgrind reports possible leaks in the MDS
659 193 Venky Shankar
660 190 Patrick Donnelly
h3. 16 Oct 2023
661
662
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
663
664 192 Patrick Donnelly
Infrastructure issues:
665
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
666
    Host lost.
667
668 196 Patrick Donnelly
One follow-up fix:
669
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
670
671 192 Patrick Donnelly
Failures:
672
673
* https://tracker.ceph.com/issues/56694
674
    qa: avoid blocking forever on hung umount
675
* https://tracker.ceph.com/issues/63089
676
    qa: tasks/mirror times out
677
* https://tracker.ceph.com/issues/52624
678
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
679
* https://tracker.ceph.com/issues/59531
680
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
681
* https://tracker.ceph.com/issues/57676
682
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
683
* https://tracker.ceph.com/issues/62658 
684
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
685
* https://tracker.ceph.com/issues/61243
686
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
687
* https://tracker.ceph.com/issues/57656
688
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
689
* https://tracker.ceph.com/issues/63233
690
  mon|client|mds: valgrind reports possible leaks in the MDS
691 197 Patrick Donnelly
* https://tracker.ceph.com/issues/63278
692
  kclient: may wrongly decode session messages and believe it is blocklisted (dead jobs)
693 192 Patrick Donnelly
694 189 Rishabh Dave
h3. 9 Oct 2023
695
696
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
697
698
* https://tracker.ceph.com/issues/54460
699
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
700
* https://tracker.ceph.com/issues/63141
701
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
702
* https://tracker.ceph.com/issues/62937
703
  logrotate doesn't support parallel execution on same set of logfiles
704
* https://tracker.ceph.com/issues/61400
705
  valgrind+ceph-mon issues
706
* https://tracker.ceph.com/issues/57676
707
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
708
* https://tracker.ceph.com/issues/55805
709
  error during scrub thrashing reached max tries in 900 secs
710
711 188 Venky Shankar
h3. 26 Sep 2023
712
713
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
714
715
* https://tracker.ceph.com/issues/52624
716
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
717
* https://tracker.ceph.com/issues/62873
718
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
719
* https://tracker.ceph.com/issues/61400
720
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
721
* https://tracker.ceph.com/issues/57676
722
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
723
* https://tracker.ceph.com/issues/62682
724
    mon: no mdsmap broadcast after "fs set joinable" is set to true
725
* https://tracker.ceph.com/issues/63089
726
    qa: tasks/mirror times out
727
728 185 Rishabh Dave
h3. 22 Sep 2023
729
730
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
731
732
* https://tracker.ceph.com/issues/59348
733
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
734
* https://tracker.ceph.com/issues/59344
735
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
736
* https://tracker.ceph.com/issues/59531
737
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
738
* https://tracker.ceph.com/issues/61574
739
  build failure for mdtest project
740
* https://tracker.ceph.com/issues/62702
741
  fsstress.sh: MDS slow requests for the internal 'rename' requests
742
* https://tracker.ceph.com/issues/57676
743
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
744
745
* https://tracker.ceph.com/issues/62863 
746
  deadlock in ceph-fuse causes teuthology job to hang and fail
747
* https://tracker.ceph.com/issues/62870
748
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
749
* https://tracker.ceph.com/issues/62873
750
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
751
752 186 Venky Shankar
h3. 20 Sep 2023
753
754
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
755
756
* https://tracker.ceph.com/issues/52624
757
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
758
* https://tracker.ceph.com/issues/61400
759
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
760
* https://tracker.ceph.com/issues/61399
761
    libmpich: undefined references to fi_strerror
762
* https://tracker.ceph.com/issues/62081
763
    tasks/fscrypt-common does not finish, timesout
764
* https://tracker.ceph.com/issues/62658 
765
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
766
* https://tracker.ceph.com/issues/62915
767
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
768
* https://tracker.ceph.com/issues/59531
769
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
770
* https://tracker.ceph.com/issues/62873
771
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
772
* https://tracker.ceph.com/issues/62936
773
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
774
* https://tracker.ceph.com/issues/62937
775
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
776
* https://tracker.ceph.com/issues/62510
777
    snaptest-git-ceph.sh failure with fs/thrash
778
780
* https://tracker.ceph.com/issues/62126
781
    test failure: suites/blogbench.sh stops running
782 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
783
    mon: no mdsmap broadcast after "fs set joinable" is set to true
784 186 Venky Shankar
785 184 Milind Changire
h3. 19 Sep 2023
786
787
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
788
789
* https://tracker.ceph.com/issues/58220#note-9
790
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
791
* https://tracker.ceph.com/issues/62702
792
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
793
* https://tracker.ceph.com/issues/57676
794
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
795
* https://tracker.ceph.com/issues/59348
796
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
797
* https://tracker.ceph.com/issues/52624
798
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
799
* https://tracker.ceph.com/issues/51964
800
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
801
* https://tracker.ceph.com/issues/61243
802
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
803
* https://tracker.ceph.com/issues/59344
804
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
805
* https://tracker.ceph.com/issues/62873
806
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
807
* https://tracker.ceph.com/issues/59413
808
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
809
* https://tracker.ceph.com/issues/53859
810
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
811
* https://tracker.ceph.com/issues/62482
812
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
813
814 178 Patrick Donnelly
815 177 Venky Shankar
h3. 13 Sep 2023
816
817
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
818
819
* https://tracker.ceph.com/issues/52624
820
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
821
* https://tracker.ceph.com/issues/57655
822
    qa: fs:mixed-clients kernel_untar_build failure
823
* https://tracker.ceph.com/issues/57676
824
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
825
* https://tracker.ceph.com/issues/61243
826
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
827
* https://tracker.ceph.com/issues/62567
828
    postgres workunit times out - MDS_SLOW_REQUEST in logs
829
* https://tracker.ceph.com/issues/61400
830
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
831
* https://tracker.ceph.com/issues/61399
832
    libmpich: undefined references to fi_strerror
833
837
* https://tracker.ceph.com/issues/51964
838
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
839
* https://tracker.ceph.com/issues/62081
840
    tasks/fscrypt-common does not finish, timesout
841 178 Patrick Donnelly
842 179 Patrick Donnelly
h3. 2023 Sep 12
843 178 Patrick Donnelly
844
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
845 1 Patrick Donnelly
846 181 Patrick Donnelly
A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
847
848 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
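For context, here is a minimal sketch (assuming a CephFS mounted at a hypothetical /mnt/cephfs; this is not the actual test code) of how a test like test_export_pin_many pins directories to MDS ranks via the <code>ceph.dir.pin</code> vxattr:

<pre><code class="python">
import os

MOUNT = "/mnt/cephfs"   # hypothetical client mountpoint
RANKS = 3               # hypothetical number of active MDS ranks

for rank in range(RANKS):
    d = os.path.join(MOUNT, "pin_%d" % rank)
    os.makedirs(d, exist_ok=True)
    # Pin the directory subtree to the given MDS rank.
    os.setxattr(d, "ceph.dir.pin", str(rank).encode())
</code></pre>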
849 181 Patrick Donnelly
850
Failures:
851
852 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
853
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
854
* https://tracker.ceph.com/issues/57656
855
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
856
* https://tracker.ceph.com/issues/55805
857
  error during scrub thrashing: reached max tries in 900 secs
858
* https://tracker.ceph.com/issues/62067
859
    ffsb.sh failure "Resource temporarily unavailable"
860
* https://tracker.ceph.com/issues/59344
861
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
862
* https://tracker.ceph.com/issues/61399
863 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
864
* https://tracker.ceph.com/issues/62832
865
  common: config_proxy deadlock during shutdown (and possibly other times)
866
* https://tracker.ceph.com/issues/59413
867 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
868 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
869
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
870
* https://tracker.ceph.com/issues/62567
871
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
872
* https://tracker.ceph.com/issues/54460
873
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
874
* https://tracker.ceph.com/issues/58220#note-9
875
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
876
878 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
879
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
880
* https://tracker.ceph.com/issues/62848
881
    qa: fail_fs upgrade scenario hanging
882
* https://tracker.ceph.com/issues/62081
883
    tasks/fscrypt-common does not finish, timesout
884 177 Venky Shankar
885 176 Venky Shankar
h3. 11 Sep 2023
886 175 Venky Shankar
887
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
888
889
* https://tracker.ceph.com/issues/52624
890
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
891
* https://tracker.ceph.com/issues/61399
892
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
893
* https://tracker.ceph.com/issues/57655
894
    qa: fs:mixed-clients kernel_untar_build failure
895
* https://tracker.ceph.com/issues/61399
896
    ior build failure
897
* https://tracker.ceph.com/issues/59531
898
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
899
* https://tracker.ceph.com/issues/59344
900
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
901
* https://tracker.ceph.com/issues/59346
902
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
903
* https://tracker.ceph.com/issues/59348
904
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
905
* https://tracker.ceph.com/issues/57676
906
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
907
* https://tracker.ceph.com/issues/61243
908
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
909
* https://tracker.ceph.com/issues/62567
910
  postgres workunit times out - MDS_SLOW_REQUEST in logs
911
912
913 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
914
915
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
916
917
* https://tracker.ceph.com/issues/51964
918
  test_cephfs_mirror_restart_sync_on_blocklist failure
919
* https://tracker.ceph.com/issues/59348
920
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
921
* https://tracker.ceph.com/issues/53859
922
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
923
* https://tracker.ceph.com/issues/61892
924
  test_strays.TestStrays.test_snapshot_remove failed
925
* https://tracker.ceph.com/issues/54460
926
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
927
* https://tracker.ceph.com/issues/59346
928
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
929
* https://tracker.ceph.com/issues/59344
930
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
931
* https://tracker.ceph.com/issues/62484
932
  qa: ffsb.sh test failure
933
* https://tracker.ceph.com/issues/62567
934
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
935
  
936
* https://tracker.ceph.com/issues/61399
937
  ior build failure
938
* https://tracker.ceph.com/issues/57676
939
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
940
* https://tracker.ceph.com/issues/55805
941
  error during scrub thrashing: reached max tries in 900 secs
942
943 172 Rishabh Dave
h3. 6 Sep 2023
944 171 Rishabh Dave
945 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
946 171 Rishabh Dave
947 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
948
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
949 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
950
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
951 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
952 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
953
* https://tracker.ceph.com/issues/59348
954
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
955
* https://tracker.ceph.com/issues/54462
956
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
957
* https://tracker.ceph.com/issues/62556
958
  test_acls: xfstests_dev: python2 is missing
959
* https://tracker.ceph.com/issues/62067
960
  ffsb.sh failure "Resource temporarily unavailable"
961
* https://tracker.ceph.com/issues/57656
962
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
963 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
964
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
965 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
966 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
967
968 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
969
  ior build failure
970
* https://tracker.ceph.com/issues/57676
971
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
972
* https://tracker.ceph.com/issues/55805
973
  error during scrub thrashing: reached max tries in 900 secs
974 173 Rishabh Dave
975
* https://tracker.ceph.com/issues/62567
976
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
977
* https://tracker.ceph.com/issues/62702
978
  workunit test suites/fsstress.sh on smithi066 with status 124
979 170 Rishabh Dave
980
h3. 5 Sep 2023
981
982
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
983
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
984
  This run has failures, but according to Adam King these are not relevant and should be ignored.
985
986
* https://tracker.ceph.com/issues/61892
987
  test_snapshot_remove (test_strays.TestStrays) failed
988
* https://tracker.ceph.com/issues/59348
989
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
990
* https://tracker.ceph.com/issues/54462
991
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
992
* https://tracker.ceph.com/issues/62067
993
  ffsb.sh failure "Resource temporarily unavailable"
994
* https://tracker.ceph.com/issues/57656 
995
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
996
* https://tracker.ceph.com/issues/59346
997
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
998
* https://tracker.ceph.com/issues/59344
999
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1000
* https://tracker.ceph.com/issues/50223
1001
  client.xxxx isn't responding to mclientcaps(revoke)
1002
* https://tracker.ceph.com/issues/57655
1003
  qa: fs:mixed-clients kernel_untar_build failure
1004
* https://tracker.ceph.com/issues/62187
1005
  iozone.sh: line 5: iozone: command not found
1006
 
1007
* https://tracker.ceph.com/issues/61399
1008
  ior build failure
1009
* https://tracker.ceph.com/issues/57676
1010
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1011
* https://tracker.ceph.com/issues/55805
1012
  error during scrub thrashing: reached max tries in 900 secs
1013 169 Venky Shankar
1014
1015
h3. 31 Aug 2023
1016
1017
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
1018
1019
* https://tracker.ceph.com/issues/52624
1020
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1021
* https://tracker.ceph.com/issues/62187
1022
    iozone: command not found
1023
* https://tracker.ceph.com/issues/61399
1024
    ior build failure
1025
* https://tracker.ceph.com/issues/59531
1026
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
1027
* https://tracker.ceph.com/issues/61399
1028
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1029
* https://tracker.ceph.com/issues/57655
1030
    qa: fs:mixed-clients kernel_untar_build failure
1031
* https://tracker.ceph.com/issues/59344
1032
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1033
* https://tracker.ceph.com/issues/59346
1034
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1035
* https://tracker.ceph.com/issues/59348
1036
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1037
* https://tracker.ceph.com/issues/59413
1038
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
1039
* https://tracker.ceph.com/issues/62653
1040
    qa: unimplemented fcntl command: 1036 with fsstress
1041
* https://tracker.ceph.com/issues/61400
1042
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1043
* https://tracker.ceph.com/issues/62658
1044
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
1045
* https://tracker.ceph.com/issues/62188
1046
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1047 168 Venky Shankar
1048
1049
h3. 25 Aug 2023
1050
1051
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
1052
1053
* https://tracker.ceph.com/issues/59344
1054
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1055
* https://tracker.ceph.com/issues/59346
1056
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1057
* https://tracker.ceph.com/issues/59348
1058
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1059
* https://tracker.ceph.com/issues/57655
1060
    qa: fs:mixed-clients kernel_untar_build failure
1061
* https://tracker.ceph.com/issues/61243
1062
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1063
* https://tracker.ceph.com/issues/61399
1064
    ior build failure
1065
* https://tracker.ceph.com/issues/61399
1066
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1067
* https://tracker.ceph.com/issues/62484
1068
    qa: ffsb.sh test failure
1069
* https://tracker.ceph.com/issues/59531
1070
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
1071
* https://tracker.ceph.com/issues/62510
1072
    snaptest-git-ceph.sh failure with fs/thrash
1073 167 Venky Shankar
1074
1075
h3. 24 Aug 2023
1076
1077
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
1078
1079
* https://tracker.ceph.com/issues/57676
1080
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1081
* https://tracker.ceph.com/issues/51964
1082
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1083
* https://tracker.ceph.com/issues/59344
1084
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1085
* https://tracker.ceph.com/issues/59346
1086
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1087
* https://tracker.ceph.com/issues/59348
1088
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1089
* https://tracker.ceph.com/issues/61399
1090
    ior build failure
1091
* https://tracker.ceph.com/issues/61399
1092
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1093
* https://tracker.ceph.com/issues/62510
1094
    snaptest-git-ceph.sh failure with fs/thrash
1095
* https://tracker.ceph.com/issues/62484
1096
    qa: ffsb.sh test failure
1097
* https://tracker.ceph.com/issues/57087
1098
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1099
* https://tracker.ceph.com/issues/57656
1100
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1101
* https://tracker.ceph.com/issues/62187
1102
    iozone: command not found
1103
* https://tracker.ceph.com/issues/62188
1104
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1105
* https://tracker.ceph.com/issues/62567
1106
    postgres workunit times out - MDS_SLOW_REQUEST in logs
1107 166 Venky Shankar
1108
1109
h3. 22 Aug 2023
1110
1111
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
1112
1113
* https://tracker.ceph.com/issues/57676
1114
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1115
* https://tracker.ceph.com/issues/51964
1116
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1117
* https://tracker.ceph.com/issues/59344
1118
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1119
* https://tracker.ceph.com/issues/59346
1120
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1121
* https://tracker.ceph.com/issues/59348
1122
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1123
* https://tracker.ceph.com/issues/61399
1124
    ior build failure
1125
* https://tracker.ceph.com/issues/61399
1126
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1127
* https://tracker.ceph.com/issues/57655
1128
    qa: fs:mixed-clients kernel_untar_build failure
1129
* https://tracker.ceph.com/issues/61243
1130
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1131
* https://tracker.ceph.com/issues/62188
1132
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1133
* https://tracker.ceph.com/issues/62510
1134
    snaptest-git-ceph.sh failure with fs/thrash
1135
* https://tracker.ceph.com/issues/62511
1136
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
1137 165 Venky Shankar
1138
1139
h3. 14 Aug 2023
1140
1141
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
1142
1143
* https://tracker.ceph.com/issues/51964
1144
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1145
* https://tracker.ceph.com/issues/61400
1146
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1147
* https://tracker.ceph.com/issues/61399
1148
    ior build failure
1149
* https://tracker.ceph.com/issues/59348
1150
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1151
* https://tracker.ceph.com/issues/59531
1152
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1153
* https://tracker.ceph.com/issues/59344
1154
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1155
* https://tracker.ceph.com/issues/59346
1156
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1157
* https://tracker.ceph.com/issues/61399
1158
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1159
* https://tracker.ceph.com/issues/59684 [kclient bug]
1160
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1161
* https://tracker.ceph.com/issues/61243 (NEW)
1162
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1163
* https://tracker.ceph.com/issues/57655
1164
    qa: fs:mixed-clients kernel_untar_build failure
1165
* https://tracker.ceph.com/issues/57656
1166
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1167 163 Venky Shankar
1168
1169
h3. 28 JULY 2023
1170
1171
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
1172
1173
* https://tracker.ceph.com/issues/51964
1174
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1175
* https://tracker.ceph.com/issues/61400
1176
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1177
* https://tracker.ceph.com/issues/61399
1178
    ior build failure
1179
* https://tracker.ceph.com/issues/57676
1180
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1181
* https://tracker.ceph.com/issues/59348
1182
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1183
* https://tracker.ceph.com/issues/59531
1184
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
1185
* https://tracker.ceph.com/issues/59344
1186
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1187
* https://tracker.ceph.com/issues/59346
1188
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1189
* https://github.com/ceph/ceph/pull/52556
1190
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
1191
* https://tracker.ceph.com/issues/62187
1192
    iozone: command not found
1193
* https://tracker.ceph.com/issues/61399
1194
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
1195
* https://tracker.ceph.com/issues/62188
1196 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
1197 158 Rishabh Dave
1198
h3. 24 Jul 2023
1199
1200
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1201
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
1202
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
1203
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1204
One more run to check whether blogbench.sh fails every time:
1205
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
1206
blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
1207 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
1208
1209
* https://tracker.ceph.com/issues/61892
1210
  test_snapshot_remove (test_strays.TestStrays) failed
1211
* https://tracker.ceph.com/issues/53859
1212
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1213
* https://tracker.ceph.com/issues/61982
1214
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1215
* https://tracker.ceph.com/issues/52438
1216
  qa: ffsb timeout
1217
* https://tracker.ceph.com/issues/54460
1218
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1219
* https://tracker.ceph.com/issues/57655
1220
  qa: fs:mixed-clients kernel_untar_build failure
1221
* https://tracker.ceph.com/issues/48773
1222
  reached max tries: scrub does not complete
1223
* https://tracker.ceph.com/issues/58340
1224
  mds: fsstress.sh hangs with multimds
1225
* https://tracker.ceph.com/issues/61400
1226
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1227
* https://tracker.ceph.com/issues/57206
1228
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1229
  
1230
* https://tracker.ceph.com/issues/57656
1231
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
1232
* https://tracker.ceph.com/issues/61399
1233
  ior build failure
1234
* https://tracker.ceph.com/issues/57676
1235
  error during scrub thrashing: backtrace
1236
  
1237
* https://tracker.ceph.com/issues/38452
1238
  'sudo -u postgres -- pgbench -s 500 -i' failed
1239 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
1240 157 Venky Shankar
  blogbench.sh failure
1241
1242
h3. 18 July 2023
1243
1244
* https://tracker.ceph.com/issues/52624
1245
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1246
* https://tracker.ceph.com/issues/57676
1247
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1248
* https://tracker.ceph.com/issues/54460
1249
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1250
* https://tracker.ceph.com/issues/57655
1251
    qa: fs:mixed-clients kernel_untar_build failure
1252
* https://tracker.ceph.com/issues/51964
1253
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1254
* https://tracker.ceph.com/issues/59344
1255
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1256
* https://tracker.ceph.com/issues/61182
1257
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1258
* https://tracker.ceph.com/issues/61957
1259
    test_client_limits.TestClientLimits.test_client_release_bug
1260
* https://tracker.ceph.com/issues/59348
1261
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1262
* https://tracker.ceph.com/issues/61892
1263
    test_strays.TestStrays.test_snapshot_remove failed
1264
* https://tracker.ceph.com/issues/59346
1265
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1266
* https://tracker.ceph.com/issues/44565
1267
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1268
* https://tracker.ceph.com/issues/62067
1269
    ffsb.sh failure "Resource temporarily unavailable"
1270 156 Venky Shankar
1271
1272
h3. 17 July 2023
1273
1274
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
1275
1276
* https://tracker.ceph.com/issues/61982
1277
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1278
* https://tracker.ceph.com/issues/59344
1279
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1280
* https://tracker.ceph.com/issues/61182
1281
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1282
* https://tracker.ceph.com/issues/61957
1283
    test_client_limits.TestClientLimits.test_client_release_bug
1284
* https://tracker.ceph.com/issues/61400
1285
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1286
* https://tracker.ceph.com/issues/59348
1287
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1288
* https://tracker.ceph.com/issues/61892
1289
    test_strays.TestStrays.test_snapshot_remove failed
1290
* https://tracker.ceph.com/issues/59346
1291
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1292
* https://tracker.ceph.com/issues/62036
1293
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
1294
* https://tracker.ceph.com/issues/61737
1295
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1296
* https://tracker.ceph.com/issues/44565
1297
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1298 155 Rishabh Dave
1299 1 Patrick Donnelly
1300 153 Rishabh Dave
h3. 13 July 2023 Run 2
1301 152 Rishabh Dave
1302
1303
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1304
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
1305
1306
* https://tracker.ceph.com/issues/61957
1307
  test_client_limits.TestClientLimits.test_client_release_bug
1308
* https://tracker.ceph.com/issues/61982
1309
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1310
* https://tracker.ceph.com/issues/59348
1311
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1312
* https://tracker.ceph.com/issues/59344
1313
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1314
* https://tracker.ceph.com/issues/54460
1315
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1316
* https://tracker.ceph.com/issues/57655
1317
  qa: fs:mixed-clients kernel_untar_build failure
1318
* https://tracker.ceph.com/issues/61400
1319
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1320
* https://tracker.ceph.com/issues/61399
1321
  ior build failure
1322
1323 151 Venky Shankar
h3. 13 July 2023
1324
1325
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
1326
1327
* https://tracker.ceph.com/issues/54460
1328
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1329
* https://tracker.ceph.com/issues/61400
1330
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1331
* https://tracker.ceph.com/issues/57655
1332
    qa: fs:mixed-clients kernel_untar_build failure
1333
* https://tracker.ceph.com/issues/61945
1334
    LibCephFS.DelegTimeout failure
1335
* https://tracker.ceph.com/issues/52624
1336
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1337
* https://tracker.ceph.com/issues/57676
1338
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1339
* https://tracker.ceph.com/issues/59348
1340
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1341
* https://tracker.ceph.com/issues/59344
1342
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1343
* https://tracker.ceph.com/issues/51964
1344
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1345
* https://tracker.ceph.com/issues/59346
1346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1347
* https://tracker.ceph.com/issues/61982
1348
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
1349 150 Rishabh Dave
1350
1351
h3. 13 Jul 2023
1352
1353
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1354
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
1355
1356
* https://tracker.ceph.com/issues/61957
1357
  test_client_limits.TestClientLimits.test_client_release_bug
1358
* https://tracker.ceph.com/issues/59348
1359
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1360
* https://tracker.ceph.com/issues/59346
1361
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
1362
* https://tracker.ceph.com/issues/48773
1363
  scrub does not complete: reached max tries
1364
* https://tracker.ceph.com/issues/59344
1365
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
1366
* https://tracker.ceph.com/issues/52438
1367
  qa: ffsb timeout
1368
* https://tracker.ceph.com/issues/57656
1369
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1370
* https://tracker.ceph.com/issues/58742
1371
  xfstests-dev: kcephfs: generic
1372
* https://tracker.ceph.com/issues/61399
1373 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
1374 149 Rishabh Dave
1375 148 Rishabh Dave
h3. 12 July 2023
1376
1377
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1378
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
1379
1380
* https://tracker.ceph.com/issues/61892
1381
  test_strays.TestStrays.test_snapshot_remove failed
1382
* https://tracker.ceph.com/issues/59348
1383
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1384
* https://tracker.ceph.com/issues/53859
1385
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1386
* https://tracker.ceph.com/issues/59346
1387
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1388
* https://tracker.ceph.com/issues/58742
1389
  xfstests-dev: kcephfs: generic
1390
* https://tracker.ceph.com/issues/59344
1391
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1392
* https://tracker.ceph.com/issues/52438
1393
  qa: ffsb timeout
1394
* https://tracker.ceph.com/issues/57656
1395
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1396
* https://tracker.ceph.com/issues/54460
1397
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1398
* https://tracker.ceph.com/issues/57655
1399
  qa: fs:mixed-clients kernel_untar_build failure
1400
* https://tracker.ceph.com/issues/61182
1401
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
1402
* https://tracker.ceph.com/issues/61400
1403
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
1404 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
1405 146 Patrick Donnelly
  reached max tries: scrub does not complete
1406
1407
h3. 05 July 2023
1408
1409
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
1410
1411 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1412 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1413
1414
h3. 27 Jun 2023
1415
1416
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
1417 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
1418
1419
* https://tracker.ceph.com/issues/59348
1420
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1421
* https://tracker.ceph.com/issues/54460
1422
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1423
* https://tracker.ceph.com/issues/59346
1424
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1425
* https://tracker.ceph.com/issues/59344
1426
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1427
* https://tracker.ceph.com/issues/61399
1428
  libmpich: undefined references to fi_strerror
1429
* https://tracker.ceph.com/issues/50223
1430
  client.xxxx isn't responding to mclientcaps(revoke)
1431 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
1432
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1433 142 Venky Shankar
1434
1435
h3. 22 June 2023
1436
1437
* https://tracker.ceph.com/issues/57676
1438
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1439
* https://tracker.ceph.com/issues/54460
1440
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1441
* https://tracker.ceph.com/issues/59344
1442
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1443
* https://tracker.ceph.com/issues/59348
1444
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1445
* https://tracker.ceph.com/issues/61400
1446
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1447
* https://tracker.ceph.com/issues/57655
1448
    qa: fs:mixed-clients kernel_untar_build failure
1449
* https://tracker.ceph.com/issues/61394
1450
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
1451
* https://tracker.ceph.com/issues/61762
1452
    qa: wait_for_clean: failed before timeout expired
1453
* https://tracker.ceph.com/issues/61775
1454
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
1455
* https://tracker.ceph.com/issues/44565
1456
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1457
* https://tracker.ceph.com/issues/61790
1458
    cephfs client to mds comms remain silent after reconnect
1459
* https://tracker.ceph.com/issues/61791
1460
    snaptest-git-ceph.sh test timed out (job dead)
1461 139 Venky Shankar
1462
1463
h3. 20 June 2023
1464
1465
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
1466
1467
* https://tracker.ceph.com/issues/57676
1468
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1469
* https://tracker.ceph.com/issues/54460
1470
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1471 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
1472 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1473 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
1474 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
1475
* https://tracker.ceph.com/issues/59344
1476
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1477
* https://tracker.ceph.com/issues/59348
1478
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1479
* https://tracker.ceph.com/issues/57656
1480
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1481
* https://tracker.ceph.com/issues/61400
1482
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1483
* https://tracker.ceph.com/issues/57655
1484
    qa: fs:mixed-clients kernel_untar_build failure
1485
* https://tracker.ceph.com/issues/44565
1486
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
1487
* https://tracker.ceph.com/issues/61737
1488 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
1489
1490
h3. 16 June 2023
1491
1492 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1493 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1494 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
1495 1 Patrick Donnelly
(bins were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
1496
1497
1498
* https://tracker.ceph.com/issues/59344
1499
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1500 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
1501
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1502 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
1503
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1504
* https://tracker.ceph.com/issues/57656
1505
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1506
* https://tracker.ceph.com/issues/54460
1507
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1508 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1509
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1510 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
1511
  libmpich: undefined references to fi_strerror
1512
* https://tracker.ceph.com/issues/58945
1513
  xfstests-dev: ceph-fuse: generic 
1514 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
1515 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
1516
1517
h3. 24 May 2023
1518
1519
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
1520
1521
* https://tracker.ceph.com/issues/57676
1522
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1523
* https://tracker.ceph.com/issues/59683
1524
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1525
* https://tracker.ceph.com/issues/61399
1526
    qa: "[Makefile:299: ior] Error 1"
1527
* https://tracker.ceph.com/issues/61265
1528
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
1529
* https://tracker.ceph.com/issues/59348
1530
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1531
* https://tracker.ceph.com/issues/59346
1532
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
1533
* https://tracker.ceph.com/issues/61400
1534
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
1535
* https://tracker.ceph.com/issues/54460
1536
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1537
* https://tracker.ceph.com/issues/51964
1538
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1539
* https://tracker.ceph.com/issues/59344
1540
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
1541
* https://tracker.ceph.com/issues/61407
1542
    mds: abort on CInode::verify_dirfrags
1543
* https://tracker.ceph.com/issues/48773
1544
    qa: scrub does not complete
1545
* https://tracker.ceph.com/issues/57655
1546
    qa: fs:mixed-clients kernel_untar_build failure
1547
* https://tracker.ceph.com/issues/61409
1548 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
1549
1550
h3. 15 May 2023
1551 130 Venky Shankar
1552 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
1553
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
1554
1555
* https://tracker.ceph.com/issues/52624
1556
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1557
* https://tracker.ceph.com/issues/54460
1558
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1559
* https://tracker.ceph.com/issues/57676
1560
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1561
* https://tracker.ceph.com/issues/59684 [kclient bug]
1562
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1563
* https://tracker.ceph.com/issues/59348
1564
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1565 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
1566
    dbench test results in call trace in dmesg [kclient bug]
1567 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
1568 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
1569 125 Venky Shankar
1570
 
1571 129 Rishabh Dave
h3. 11 May 2023
1572
1573
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
1574
1575
* https://tracker.ceph.com/issues/59684 [kclient bug]
1576
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1577
* https://tracker.ceph.com/issues/59348
1578
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1579
* https://tracker.ceph.com/issues/57655
1580
  qa: fs:mixed-clients kernel_untar_build failure
1581
* https://tracker.ceph.com/issues/57676
1582
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1583
* https://tracker.ceph.com/issues/55805
1584
  error during scrub thrashing reached max tries in 900 secs
1585
* https://tracker.ceph.com/issues/54460
1586
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1587
* https://tracker.ceph.com/issues/57656
1588
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1589
* https://tracker.ceph.com/issues/58220
1590
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1591 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
1592
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1593 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
1594
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
1595 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
1596
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
1597 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
1598
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
1599
1600 125 Venky Shankar
h3. 11 May 2023
1601 127 Venky Shankar
1602
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
1603 126 Venky Shankar
1604 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
1605
 was included in the branch; however, the PR got updated and needs a retest).
1606
1607
* https://tracker.ceph.com/issues/52624
1608
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1609
* https://tracker.ceph.com/issues/54460
1610
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1611
* https://tracker.ceph.com/issues/57676
1612
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1613
* https://tracker.ceph.com/issues/59683
1614
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1615
* https://tracker.ceph.com/issues/59684 [kclient bug]
1616
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1617
* https://tracker.ceph.com/issues/59348
1618 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1619
1620
h3. 09 May 2023
1621
1622
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1623
1624
* https://tracker.ceph.com/issues/52624
1625
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1626
* https://tracker.ceph.com/issues/58340
1627
    mds: fsstress.sh hangs with multimds
1628
* https://tracker.ceph.com/issues/54460
1629
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1630
* https://tracker.ceph.com/issues/57676
1631
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1632
* https://tracker.ceph.com/issues/51964
1633
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1634
* https://tracker.ceph.com/issues/59350
1635
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1636
* https://tracker.ceph.com/issues/59683
1637
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1638
* https://tracker.ceph.com/issues/59684 [kclient bug]
1639
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1640
* https://tracker.ceph.com/issues/59348
1641 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1642
1643
h3. 10 Apr 2023
1644
1645
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1646
1647
* https://tracker.ceph.com/issues/52624
1648
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1649
* https://tracker.ceph.com/issues/58340
1650
    mds: fsstress.sh hangs with multimds
1651
* https://tracker.ceph.com/issues/54460
1652
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1653
* https://tracker.ceph.com/issues/57676
1654
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1655 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1656 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1657 121 Rishabh Dave
1658 120 Rishabh Dave
h3. 31 Mar 2023
1659 122 Rishabh Dave
1660
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1661 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1662
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1663
1664
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1665
1666
* https://tracker.ceph.com/issues/57676
1667
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1668
* https://tracker.ceph.com/issues/54460
1669
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1670
* https://tracker.ceph.com/issues/58220
1671
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1672
* https://tracker.ceph.com/issues/58220#note-9
1673
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1674
* https://tracker.ceph.com/issues/56695
1675
  Command failed (workunit test suites/pjd.sh)
1676
* https://tracker.ceph.com/issues/58564 
1677
  workunit dbench failed with error code 1
1678
* https://tracker.ceph.com/issues/57206
1679
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1680
* https://tracker.ceph.com/issues/57580
1681
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1682
* https://tracker.ceph.com/issues/58940
1683
  ceph osd hit ceph_abort
1684
* https://tracker.ceph.com/issues/55805
1685 118 Venky Shankar
  error during scrub thrashing: reached max tries in 900 secs
1686
1687
h3. 30 March 2023
1688
1689
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1690
1691
* https://tracker.ceph.com/issues/58938
1692
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1693
* https://tracker.ceph.com/issues/51964
1694
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1695
* https://tracker.ceph.com/issues/58340
1696 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1697
1698 115 Venky Shankar
h3. 29 March 2023
1699 114 Venky Shankar
1700
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1701
1702
* https://tracker.ceph.com/issues/56695
1703
    [RHEL stock] pjd test failures
1704
* https://tracker.ceph.com/issues/57676
1705
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1706
* https://tracker.ceph.com/issues/57087
1707
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1708 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1709
    mds: fsstress.sh hangs with multimds
1710 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1711
    qa: fs:mixed-clients kernel_untar_build failure
1712 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1713
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1714 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1715 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1716
1717
h3. 13 Mar 2023
1718
1719
* https://tracker.ceph.com/issues/56695
1720
    [RHEL stock] pjd test failures
1721
* https://tracker.ceph.com/issues/57676
1722
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1723
* https://tracker.ceph.com/issues/51964
1724
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1725
* https://tracker.ceph.com/issues/54460
1726
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1727
* https://tracker.ceph.com/issues/57656
1728 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1729
1730
h3. 09 Mar 2023
1731
1732
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1733
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1734
1735
* https://tracker.ceph.com/issues/56695
1736
    [RHEL stock] pjd test failures
1737
* https://tracker.ceph.com/issues/57676
1738
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1739
* https://tracker.ceph.com/issues/51964
1740
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1741
* https://tracker.ceph.com/issues/54460
1742
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1743
* https://tracker.ceph.com/issues/58340
1744
    mds: fsstress.sh hangs with multimds
1745
* https://tracker.ceph.com/issues/57087
1746 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1747
1748
h3. 07 Mar 2023
1749
1750
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1751
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1752
1753
* https://tracker.ceph.com/issues/56695
1754
    [RHEL stock] pjd test failures
1755
* https://tracker.ceph.com/issues/57676
1756
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1757
* https://tracker.ceph.com/issues/51964
1758
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1759
* https://tracker.ceph.com/issues/57656
1760
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1761
* https://tracker.ceph.com/issues/57655
1762
    qa: fs:mixed-clients kernel_untar_build failure
1763
* https://tracker.ceph.com/issues/58220
1764
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1765
* https://tracker.ceph.com/issues/54460
1766
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1767
* https://tracker.ceph.com/issues/58934
1768 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1769
1770
h3. 28 Feb 2023
1771
1772
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1773
1774
* https://tracker.ceph.com/issues/56695
1775
    [RHEL stock] pjd test failures
1776
* https://tracker.ceph.com/issues/57676
1777
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1778 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1779 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1780
1781 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1782
1783
h3. 25 Jan 2023
1784
1785
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1786
1787
* https://tracker.ceph.com/issues/52624
1788
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1789
* https://tracker.ceph.com/issues/56695
1790
    [RHEL stock] pjd test failures
1791
* https://tracker.ceph.com/issues/57676
1792
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1793
* https://tracker.ceph.com/issues/56446
1794
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1795
* https://tracker.ceph.com/issues/57206
1796
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1797
* https://tracker.ceph.com/issues/58220
1798
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1799
* https://tracker.ceph.com/issues/58340
1800
  mds: fsstress.sh hangs with multimds
1801
* https://tracker.ceph.com/issues/56011
1802
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1803
* https://tracker.ceph.com/issues/54460
1804 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1805
1806
h3. 30 JAN 2023
1807
1808
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1809
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1810 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1811
1812 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1813
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1814
* https://tracker.ceph.com/issues/56695
1815
  [RHEL stock] pjd test failures
1816
* https://tracker.ceph.com/issues/57676
1817
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1818
* https://tracker.ceph.com/issues/55332
1819
  Failure in snaptest-git-ceph.sh
1820
* https://tracker.ceph.com/issues/51964
1821
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1822
* https://tracker.ceph.com/issues/56446
1823
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1824
* https://tracker.ceph.com/issues/57655 
1825
  qa: fs:mixed-clients kernel_untar_build failure
1826
* https://tracker.ceph.com/issues/54460
1827
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1828 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1829
  mds: fsstress.sh hangs with multimds
1830 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1831 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1832
1833
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1834 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1835
  According to Venky this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1836 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1837 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1838
1839
h3. 15 Dec 2022
1840
1841
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1842
1843
* https://tracker.ceph.com/issues/52624
1844
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1845
* https://tracker.ceph.com/issues/56695
1846
    [RHEL stock] pjd test failures
1847
* https://tracker.ceph.com/issues/58219
1848
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1849
* https://tracker.ceph.com/issues/57655
1850
    qa: fs:mixed-clients kernel_untar_build failure
1851
* https://tracker.ceph.com/issues/57676
1852
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1853
* https://tracker.ceph.com/issues/58340
1854 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1855
1856
h3. 08 Dec 2022
1857 99 Venky Shankar
1858 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1859
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1860
1861
(lots of transient git.ceph.com failures)
1862
1863
* https://tracker.ceph.com/issues/52624
1864
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1865
* https://tracker.ceph.com/issues/56695
1866
    [RHEL stock] pjd test failures
1867
* https://tracker.ceph.com/issues/57655
1868
    qa: fs:mixed-clients kernel_untar_build failure
1869
* https://tracker.ceph.com/issues/58219
1870
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1871
* https://tracker.ceph.com/issues/58220
1872
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1873 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1874
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1875 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1876
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1877
* https://tracker.ceph.com/issues/54460
1878
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1879 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1880 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1881
1882
h3. 14 Oct 2022
1883
1884
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1885
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1886
1887
* https://tracker.ceph.com/issues/52624
1888
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1889
* https://tracker.ceph.com/issues/55804
1890
    Command failed (workunit test suites/pjd.sh)
1891
* https://tracker.ceph.com/issues/51964
1892
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1893
* https://tracker.ceph.com/issues/57682
1894
    client: ERROR: test_reconnect_after_blocklisted
1895 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1896 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1897
1898
h3. 10 Oct 2022
1899 92 Rishabh Dave
1900 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1901
1902
reruns
1903
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1904 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1905 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1906 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1907 91 Rishabh Dave
1908
known bugs
1909
* https://tracker.ceph.com/issues/52624
1910
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1911
* https://tracker.ceph.com/issues/50223
1912
  client.xxxx isn't responding to mclientcaps(revoke)
1913
* https://tracker.ceph.com/issues/57299
1914
  qa: test_dump_loads fails with JSONDecodeError
1915
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1916
  qa: fs:mixed-clients kernel_untar_build failure
1917
* https://tracker.ceph.com/issues/57206
1918 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1919
1920
h3. 2022 Sep 29
1921
1922
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1923
1924
* https://tracker.ceph.com/issues/55804
1925
  Command failed (workunit test suites/pjd.sh)
1926
* https://tracker.ceph.com/issues/36593
1927
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1928
* https://tracker.ceph.com/issues/52624
1929
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1930
* https://tracker.ceph.com/issues/51964
1931
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1932
* https://tracker.ceph.com/issues/56632
1933
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1934
* https://tracker.ceph.com/issues/50821
1935 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1936
1937
h3. 2022 Sep 26
1938
1939
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1940
1941
* https://tracker.ceph.com/issues/55804
1942
    qa failure: pjd link tests failed
1943
* https://tracker.ceph.com/issues/57676
1944
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1945
* https://tracker.ceph.com/issues/52624
1946
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1947
* https://tracker.ceph.com/issues/57580
1948
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1949
* https://tracker.ceph.com/issues/48773
1950
    qa: scrub does not complete
1951
* https://tracker.ceph.com/issues/57299
1952
    qa: test_dump_loads fails with JSONDecodeError
1953
* https://tracker.ceph.com/issues/57280
1954
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1955
* https://tracker.ceph.com/issues/57205
1956
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1957
* https://tracker.ceph.com/issues/57656
1958
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1959
* https://tracker.ceph.com/issues/57677
1960
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1961
* https://tracker.ceph.com/issues/57206
1962
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1963
* https://tracker.ceph.com/issues/57446
1964
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1965 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1966
    qa: fs:mixed-clients kernel_untar_build failure
1967 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1968
    client: ERROR: test_reconnect_after_blocklisted
1969 87 Patrick Donnelly
1970
1971
h3. 2022 Sep 22
1972
1973
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1974
1975
* https://tracker.ceph.com/issues/57299
1976
    qa: test_dump_loads fails with JSONDecodeError
1977
* https://tracker.ceph.com/issues/57205
1978
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1979
* https://tracker.ceph.com/issues/52624
1980
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1981
* https://tracker.ceph.com/issues/57580
1982
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1983
* https://tracker.ceph.com/issues/57280
1984
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1985
* https://tracker.ceph.com/issues/48773
1986
    qa: scrub does not complete
1987
* https://tracker.ceph.com/issues/56446
1988
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1989
* https://tracker.ceph.com/issues/57206
1990
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1991
* https://tracker.ceph.com/issues/51267
1992
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1993
1994
NEW:
1995
1996
* https://tracker.ceph.com/issues/57656
1997
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1998
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1999
    qa: fs:mixed-clients kernel_untar_build failure
2000
* https://tracker.ceph.com/issues/57657
2001
    mds: scrub locates mismatch between child accounted_rstats and self rstats
2002
2003
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
2004 80 Venky Shankar
2005 79 Venky Shankar
2006
h3. 2022 Sep 16
2007
2008
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
2009
2010
* https://tracker.ceph.com/issues/57446
2011
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
2012
* https://tracker.ceph.com/issues/57299
2013
    qa: test_dump_loads fails with JSONDecodeError
2014
* https://tracker.ceph.com/issues/50223
2015
    client.xxxx isn't responding to mclientcaps(revoke)
2016
* https://tracker.ceph.com/issues/52624
2017
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2018
* https://tracker.ceph.com/issues/57205
2019
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
2020
* https://tracker.ceph.com/issues/57280
2021
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
2022
* https://tracker.ceph.com/issues/51282
2023
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2024
* https://tracker.ceph.com/issues/48203
2025
  https://tracker.ceph.com/issues/36593
2026
    qa: quota failure
2027
    qa: quota failure caused by clients stepping on each other
2028
* https://tracker.ceph.com/issues/57580
2029 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
2030
2031 76 Rishabh Dave
2032
h3. 2022 Aug 26
2033
2034
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
2035
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
2036
2037
* https://tracker.ceph.com/issues/57206
2038
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
2039
* https://tracker.ceph.com/issues/56632
2040
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2041
* https://tracker.ceph.com/issues/56446
2042
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2043
* https://tracker.ceph.com/issues/51964
2044
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2045
* https://tracker.ceph.com/issues/53859
2046
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2047
2048
* https://tracker.ceph.com/issues/54460
2049
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2050
* https://tracker.ceph.com/issues/54462
2051
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2052
2054
* https://tracker.ceph.com/issues/36593
2055
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2056
2057
* https://tracker.ceph.com/issues/52624
2058
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2059
* https://tracker.ceph.com/issues/55804
2060
  Command failed (workunit test suites/pjd.sh)
2061
* https://tracker.ceph.com/issues/50223
2062
  client.xxxx isn't responding to mclientcaps(revoke)
2063 75 Venky Shankar
2064
2065
h3. 2022 Aug 22
2066
2067
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
2068
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
2069
2070
* https://tracker.ceph.com/issues/52624
2071
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2072
* https://tracker.ceph.com/issues/56446
2073
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2074
* https://tracker.ceph.com/issues/55804
2075
    Command failed (workunit test suites/pjd.sh)
2076
* https://tracker.ceph.com/issues/51278
2077
    mds: "FAILED ceph_assert(!segments.empty())"
2078
* https://tracker.ceph.com/issues/54460
2079
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2080
* https://tracker.ceph.com/issues/57205
2081
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
2082
* https://tracker.ceph.com/issues/57206
2083
    ceph_test_libcephfs_reclaim crashes during test
2084
* https://tracker.ceph.com/issues/53859
2085
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2086
* https://tracker.ceph.com/issues/50223
2087 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
2088
2089
h3. 2022 Aug 12
2090
2091
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
2092
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
2093
2094
* https://tracker.ceph.com/issues/52624
2095
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2096
* https://tracker.ceph.com/issues/56446
2097
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2098
* https://tracker.ceph.com/issues/51964
2099
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2100
* https://tracker.ceph.com/issues/55804
2101
    Command failed (workunit test suites/pjd.sh)
2102
* https://tracker.ceph.com/issues/50223
2103
    client.xxxx isn't responding to mclientcaps(revoke)
2104
* https://tracker.ceph.com/issues/50821
2105 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
2106 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
2107 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2108
2109
h3. 2022 Aug 04
2110
2111
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
2112
2113 69 Rishabh Dave
Unrelated teuthology failure on rhel
2114 68 Rishabh Dave
2115
h3. 2022 Jul 25
2116
2117
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2118
2119 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
2120
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2121 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
2122
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
2123
2124
* https://tracker.ceph.com/issues/55804
2125
  Command failed (workunit test suites/pjd.sh)
2126
* https://tracker.ceph.com/issues/50223
2127
  client.xxxx isn't responding to mclientcaps(revoke)
2128
2129
* https://tracker.ceph.com/issues/54460
2130
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
2131 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
2132 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
2133 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
2134 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
2135
2136
h3. 2022 July 22
2137
2138
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
2139
2140
MDS_HEALTH_DUMMY error in log fixed by follow-up commit.
2141
transient selinux ping failure
2142
2143
* https://tracker.ceph.com/issues/56694
2144
    qa: avoid blocking forever on hung umount
2145
* https://tracker.ceph.com/issues/56695
2146
    [RHEL stock] pjd test failures
2147
* https://tracker.ceph.com/issues/56696
2148
    admin keyring disappears during qa run
2149
* https://tracker.ceph.com/issues/56697
2150
    qa: fs/snaps fails for fuse
2151
* https://tracker.ceph.com/issues/50222
2152
    osd: 5.2s0 deep-scrub : stat mismatch
2153
* https://tracker.ceph.com/issues/56698
2154
    client: FAILED ceph_assert(_size == 0)
2155
* https://tracker.ceph.com/issues/50223
2156
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2157 66 Rishabh Dave
2158 65 Rishabh Dave
2159
h3. 2022 Jul 15
2160
2161
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2162
2163
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
2164
2165
* https://tracker.ceph.com/issues/53859
2166
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2167
* https://tracker.ceph.com/issues/55804
2168
  Command failed (workunit test suites/pjd.sh)
2169
* https://tracker.ceph.com/issues/50223
2170
  client.xxxx isn't responding to mclientcaps(revoke)
2171
* https://tracker.ceph.com/issues/50222
2172
  osd: deep-scrub : stat mismatch
2173
2174
* https://tracker.ceph.com/issues/56632
2175
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2176
* https://tracker.ceph.com/issues/56634
2177
  workunit test fs/snaps/snaptest-intodir.sh
2178
* https://tracker.ceph.com/issues/56644
2179
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
2180
2181 61 Rishabh Dave
2182
2183
h3. 2022 July 05
2184 62 Rishabh Dave
2185 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
2186
2187
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
2188
2189
On 2nd re-run only a few jobs failed -
2190 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
2191
2192
2193
* https://tracker.ceph.com/issues/56446
2194
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2195
* https://tracker.ceph.com/issues/55804
2196
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
2197
2198
* https://tracker.ceph.com/issues/56445
2199 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2200
* https://tracker.ceph.com/issues/51267
2201
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
2202 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
2203
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
2204 61 Rishabh Dave
2205 58 Venky Shankar
2206
2207
h3. 2022 July 04
2208
2209
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
2210
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
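
For reference, excluding the broken rhel jobs is done by passing --filter-out when scheduling with teuthology-suite. A minimal sketch of such an invocation is below; the --filter-out=rhel flag and branch name come from the note and run link above, while the remaining flags are illustrative and may differ depending on the teuthology version and queue in use:

<pre>
# Sketch only: schedule the fs suite against the test branch, skipping rhel jobs
teuthology-suite \
  --suite fs \
  --ceph wip-vshankar-testing-20220627-100931 \
  --machine-type smithi \
  --filter-out rhel
</pre>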
2211
2212
* https://tracker.ceph.com/issues/56445
2213 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
2214
* https://tracker.ceph.com/issues/56446
2215
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
2216
* https://tracker.ceph.com/issues/51964
2217 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2218 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
2219 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2220
2221
h3. 2022 June 20
2222
2223
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
2224
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
2225
2226
* https://tracker.ceph.com/issues/52624
2227
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2228
* https://tracker.ceph.com/issues/55804
2229
    qa failure: pjd link tests failed
2230
* https://tracker.ceph.com/issues/54108
2231
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2232
* https://tracker.ceph.com/issues/55332
2233 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
2234
2235
h3. 2022 June 13
2236
2237
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
2238
2239
* https://tracker.ceph.com/issues/56024
2240
    cephadm: removes ceph.conf during qa run causing command failure
2241
* https://tracker.ceph.com/issues/48773
2242
    qa: scrub does not complete
2243
* https://tracker.ceph.com/issues/56012
2244
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2245 55 Venky Shankar
2246 54 Venky Shankar
2247
h3. 2022 Jun 13
2248
2249
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
2250
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
2251
2252
* https://tracker.ceph.com/issues/52624
2253
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2254
* https://tracker.ceph.com/issues/51964
2255
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2256
* https://tracker.ceph.com/issues/53859
2257
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2258
* https://tracker.ceph.com/issues/55804
2259
    qa failure: pjd link tests failed
2260
* https://tracker.ceph.com/issues/56003
2261
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
2262
* https://tracker.ceph.com/issues/56011
2263
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
2264
* https://tracker.ceph.com/issues/56012
2265 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
2266
2267
h3. 2022 Jun 07
2268
2269
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
2270
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
2271
2272
* https://tracker.ceph.com/issues/52624
2273
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2274
* https://tracker.ceph.com/issues/50223
2275
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2276
* https://tracker.ceph.com/issues/50224
2277 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
2278
2279
h3. 2022 May 12
2280 52 Venky Shankar
2281 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
2282
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)
2283
2284
* https://tracker.ceph.com/issues/52624
2285
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2286
* https://tracker.ceph.com/issues/50223
2287
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2288
* https://tracker.ceph.com/issues/55332
2289
    Failure in snaptest-git-ceph.sh
2290
* https://tracker.ceph.com/issues/53859
2291 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2292 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
2293
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2294 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
2295 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
2296
2297 50 Venky Shankar
h3. 2022 May 04
2298
2299
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
2300 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
2301
2302
* https://tracker.ceph.com/issues/52624
2303
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2304
* https://tracker.ceph.com/issues/50223
2305
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2306
* https://tracker.ceph.com/issues/55332
2307
    Failure in snaptest-git-ceph.sh
2308
* https://tracker.ceph.com/issues/53859
2309
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2310
* https://tracker.ceph.com/issues/55516
2311
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
2312
* https://tracker.ceph.com/issues/55537
2313
    mds: crash during fs:upgrade test
2314
* https://tracker.ceph.com/issues/55538
2315 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
2316
2317
h3. 2022 Apr 25
2318
2319
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
2320
2321
* https://tracker.ceph.com/issues/52624
2322
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2323
* https://tracker.ceph.com/issues/50223
2324
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2325
* https://tracker.ceph.com/issues/55258
2326
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2327
* https://tracker.ceph.com/issues/55377
2328 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
2329
2330
h3. 2022 Apr 14
2331
2332
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
2333
2334
* https://tracker.ceph.com/issues/52624
2335
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2336
* https://tracker.ceph.com/issues/50223
2337
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2338
* https://tracker.ceph.com/issues/52438
2339
    qa: ffsb timeout
2340
* https://tracker.ceph.com/issues/55170
2341
    mds: crash during rejoin (CDir::fetch_keys)
2342
* https://tracker.ceph.com/issues/55331
2343
    pjd failure
2344
* https://tracker.ceph.com/issues/48773
2345
    qa: scrub does not complete
2346
* https://tracker.ceph.com/issues/55332
2347
    Failure in snaptest-git-ceph.sh
2348
* https://tracker.ceph.com/issues/55258
2349 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2350
2351 46 Venky Shankar
h3. 2022 Apr 11
2352 45 Venky Shankar
2353
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
2354
2355
* https://tracker.ceph.com/issues/48773
2356
    qa: scrub does not complete
2357
* https://tracker.ceph.com/issues/52624
2358
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2359
* https://tracker.ceph.com/issues/52438
2360
    qa: ffsb timeout
2361
* https://tracker.ceph.com/issues/48680
2362
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
2363
* https://tracker.ceph.com/issues/55236
2364
    qa: fs/snaps tests fails with "hit max job timeout"
2365
* https://tracker.ceph.com/issues/54108
2366
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2367
* https://tracker.ceph.com/issues/54971
2368
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
2369
* https://tracker.ceph.com/issues/50223
2370
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2371
* https://tracker.ceph.com/issues/55258
2372 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
2373 42 Venky Shankar
2374 43 Venky Shankar
h3. 2022 Mar 21
2375
2376
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
2377
2378
The run didn't go well, with lots of failures; debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.
2379
2380
2381 42 Venky Shankar
h3. 2022 Mar 08
2382
2383
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
2384
2385
rerun with
2386
- (drop) https://github.com/ceph/ceph/pull/44679
2387
- (drop) https://github.com/ceph/ceph/pull/44958
2388
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
2389
2390
* https://tracker.ceph.com/issues/54419 (new)
2391
    `ceph orch upgrade start` seems to never reach completion
2392
* https://tracker.ceph.com/issues/51964
2393
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2394
* https://tracker.ceph.com/issues/52624
2395
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2396
* https://tracker.ceph.com/issues/50223
2397
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2398
* https://tracker.ceph.com/issues/52438
2399
    qa: ffsb timeout
2400
* https://tracker.ceph.com/issues/50821
2401
    qa: untar_snap_rm failure during mds thrashing
2402 41 Venky Shankar
2403
2404
h3. 2022 Feb 09
2405
2406
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
2407
2408
rerun with
2409
- (drop) https://github.com/ceph/ceph/pull/37938
2410
- (drop) https://github.com/ceph/ceph/pull/44335
2411
- (drop) https://github.com/ceph/ceph/pull/44491
2412
- (drop) https://github.com/ceph/ceph/pull/44501
2413
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
2414
2415
* https://tracker.ceph.com/issues/51964
2416
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2417
* https://tracker.ceph.com/issues/54066
2418
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
2419
* https://tracker.ceph.com/issues/48773
2420
    qa: scrub does not complete
2421
* https://tracker.ceph.com/issues/52624
2422
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2423
* https://tracker.ceph.com/issues/50223
2424
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2425
* https://tracker.ceph.com/issues/52438
2426 40 Patrick Donnelly
    qa: ffsb timeout
2427
2428
h3. 2022 Feb 01
2429
2430
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
2431
2432
* https://tracker.ceph.com/issues/54107
2433
    kclient: hang during umount
2434
* https://tracker.ceph.com/issues/54106
2435
    kclient: hang during workunit cleanup
2436
* https://tracker.ceph.com/issues/54108
2437
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
2438
* https://tracker.ceph.com/issues/48773
2439
    qa: scrub does not complete
2440
* https://tracker.ceph.com/issues/52438
2441
    qa: ffsb timeout
2442 36 Venky Shankar
2443
2444
h3. 2022 Jan 13
2445 39 Venky Shankar
2446 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2447 38 Venky Shankar
2448
rerun with:
2449 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
2450
- (drop) https://github.com/ceph/ceph/pull/43184
2451
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
2452
2453
* https://tracker.ceph.com/issues/50223
2454
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2455
* https://tracker.ceph.com/issues/51282
2456
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2457
* https://tracker.ceph.com/issues/48773
2458
    qa: scrub does not complete
2459
* https://tracker.ceph.com/issues/52624
2460
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2461
* https://tracker.ceph.com/issues/53859
2462 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
2463
2464
h3. 2022 Jan 03
2465
2466
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
2467
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
2468
2469
* https://tracker.ceph.com/issues/50223
2470
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2471
* https://tracker.ceph.com/issues/51964
2472
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2473
* https://tracker.ceph.com/issues/51267
2474
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2475
* https://tracker.ceph.com/issues/51282
2476
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2477
* https://tracker.ceph.com/issues/50821
2478
    qa: untar_snap_rm failure during mds thrashing
2479 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
2480
    mds: "FAILED ceph_assert(!segments.empty())"
2481
* https://tracker.ceph.com/issues/52279
2482 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2483 33 Patrick Donnelly
2484
2485
h3. 2021 Dec 22
2486
2487
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
2488
2489
* https://tracker.ceph.com/issues/52624
2490
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2491
* https://tracker.ceph.com/issues/50223
2492
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2493
* https://tracker.ceph.com/issues/52279
2494
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2495
* https://tracker.ceph.com/issues/50224
2496
    qa: test_mirroring_init_failure_with_recovery failure
2497
* https://tracker.ceph.com/issues/48773
2498
    qa: scrub does not complete
2499 32 Venky Shankar
2500
2501
h3. 2021 Nov 30
2502
2503
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
2504
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
2505
2506
* https://tracker.ceph.com/issues/53436
2507
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
2508
* https://tracker.ceph.com/issues/51964
2509
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2510
* https://tracker.ceph.com/issues/48812
2511
    qa: test_scrub_pause_and_resume_with_abort failure
2512
* https://tracker.ceph.com/issues/51076
2513
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2514
* https://tracker.ceph.com/issues/50223
2515
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2516
* https://tracker.ceph.com/issues/52624
2517
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2518
* https://tracker.ceph.com/issues/50250
2519
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2520 31 Patrick Donnelly
2521
2522
h3. 2021 November 9
2523
2524
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
2525
2526
* https://tracker.ceph.com/issues/53214
2527
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
2528
* https://tracker.ceph.com/issues/48773
2529
    qa: scrub does not complete
2530
* https://tracker.ceph.com/issues/50223
2531
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2532
* https://tracker.ceph.com/issues/51282
2533
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2534
* https://tracker.ceph.com/issues/52624
2535
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2536
* https://tracker.ceph.com/issues/53216
2537
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
2538
* https://tracker.ceph.com/issues/50250
2539
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2540
2541 30 Patrick Donnelly
2542
2543
h3. 2021 November 03
2544
2545
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
2546
2547
* https://tracker.ceph.com/issues/51964
2548
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
2549
* https://tracker.ceph.com/issues/51282
2550
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2551
* https://tracker.ceph.com/issues/52436
2552
    fs/ceph: "corrupt mdsmap"
2553
* https://tracker.ceph.com/issues/53074
2554
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2555
* https://tracker.ceph.com/issues/53150
2556
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
2557
* https://tracker.ceph.com/issues/53155
2558
    MDSMonitor: assertion during upgrade to v16.2.5+
2559 29 Patrick Donnelly
2560
2561
h3. 2021 October 26
2562
2563
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
2564
2565
* https://tracker.ceph.com/issues/53074
2566
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
2567
* https://tracker.ceph.com/issues/52997
2568
    testing: hanging umount
2569
* https://tracker.ceph.com/issues/50824
2570
    qa: snaptest-git-ceph bus error
2571
* https://tracker.ceph.com/issues/52436
2572
    fs/ceph: "corrupt mdsmap"
2573
* https://tracker.ceph.com/issues/48773
2574
    qa: scrub does not complete
2575
* https://tracker.ceph.com/issues/53082
2576
    ceph-fuse: segmentation fault in Client::handle_mds_map
2577
* https://tracker.ceph.com/issues/50223
2578
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2579
* https://tracker.ceph.com/issues/52624
2580
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2581
* https://tracker.ceph.com/issues/50224
2582
    qa: test_mirroring_init_failure_with_recovery failure
2583
* https://tracker.ceph.com/issues/50821
2584
    qa: untar_snap_rm failure during mds thrashing
2585
* https://tracker.ceph.com/issues/50250
2586
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2587
2588 27 Patrick Donnelly
2589
2590 28 Patrick Donnelly
h3. 2021 October 19
2591 27 Patrick Donnelly
2592
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
2593
2594
* https://tracker.ceph.com/issues/52995
2595
    qa: test_standby_count_wanted failure
2596
* https://tracker.ceph.com/issues/52948
2597
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2598
* https://tracker.ceph.com/issues/52996
2599
    qa: test_perf_counters via test_openfiletable
2600
* https://tracker.ceph.com/issues/48772
2601
    qa: pjd: not ok 9, 44, 80
2602
* https://tracker.ceph.com/issues/52997
2603
    testing: hanging umount
2604
* https://tracker.ceph.com/issues/50250
2605
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2606
* https://tracker.ceph.com/issues/52624
2607
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2608
* https://tracker.ceph.com/issues/50223
2609
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2610
* https://tracker.ceph.com/issues/50821
2611
    qa: untar_snap_rm failure during mds thrashing
2612
* https://tracker.ceph.com/issues/48773
2613
    qa: scrub does not complete
2614 26 Patrick Donnelly
2615
2616
h3. 2021 October 12
2617
2618
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2619
2620
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2621
2622
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2623
2624
2625
* https://tracker.ceph.com/issues/51282
2626
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2627
* https://tracker.ceph.com/issues/52948
2628
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2629
* https://tracker.ceph.com/issues/48773
2630
    qa: scrub does not complete
2631
* https://tracker.ceph.com/issues/50224
2632
    qa: test_mirroring_init_failure_with_recovery failure
2633
* https://tracker.ceph.com/issues/52949
2634
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2635 25 Patrick Donnelly
2636 23 Patrick Donnelly
2637 24 Patrick Donnelly
h3. 2021 October 02
2638
2639
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2640
2641
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2642
2643
test_simple failures caused by PR in this set.
2644
2645
A few reruns because of QA infra noise.
2646
2647
* https://tracker.ceph.com/issues/52822
2648
    qa: failed pacific install on fs:upgrade
2649
* https://tracker.ceph.com/issues/52624
2650
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2651
* https://tracker.ceph.com/issues/50223
2652
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2653
* https://tracker.ceph.com/issues/48773
2654
    qa: scrub does not complete
2655
2656
2657 23 Patrick Donnelly
h3. 2021 September 20
2658
2659
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2660
2661
* https://tracker.ceph.com/issues/52677
2662
    qa: test_simple failure
2663
* https://tracker.ceph.com/issues/51279
2664
    kclient hangs on umount (testing branch)
2665
* https://tracker.ceph.com/issues/50223
2666
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2667
* https://tracker.ceph.com/issues/50250
2668
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2669
* https://tracker.ceph.com/issues/52624
2670
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2671
* https://tracker.ceph.com/issues/52438
2672
    qa: ffsb timeout
2673 22 Patrick Donnelly
2674
2675
h3. 2021 September 10
2676
2677
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
2678
2679
* https://tracker.ceph.com/issues/50223
2680
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2681
* https://tracker.ceph.com/issues/50250
2682
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2683
* https://tracker.ceph.com/issues/52624
2684
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2685
* https://tracker.ceph.com/issues/52625
2686
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
2687
* https://tracker.ceph.com/issues/52439
2688
    qa: acls does not compile on centos stream
2689
* https://tracker.ceph.com/issues/50821
2690
    qa: untar_snap_rm failure during mds thrashing
2691
* https://tracker.ceph.com/issues/48773
2692
    qa: scrub does not complete
2693
* https://tracker.ceph.com/issues/52626
2694
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
2695
* https://tracker.ceph.com/issues/51279
2696
    kclient hangs on umount (testing branch)
2697 21 Patrick Donnelly
2698
2699
h3. 2021 August 27
2700
2701
Several jobs died because of device failures.
2702
2703
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
2704
2705
* https://tracker.ceph.com/issues/52430
2706
    mds: fast async create client mount breaks racy test
2707
* https://tracker.ceph.com/issues/52436
2708
    fs/ceph: "corrupt mdsmap"
2709
* https://tracker.ceph.com/issues/52437
2710
    mds: InoTable::replay_release_ids abort via test_inotable_sync
2711
* https://tracker.ceph.com/issues/51282
2712
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2713
* https://tracker.ceph.com/issues/52438
2714
    qa: ffsb timeout
2715
* https://tracker.ceph.com/issues/52439
2716
    qa: acls does not compile on centos stream
2717 20 Patrick Donnelly
2718
2719
h3. 2021 July 30
2720
2721
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
2722
2723
* https://tracker.ceph.com/issues/50250
2724
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2725
* https://tracker.ceph.com/issues/51282
2726
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2727
* https://tracker.ceph.com/issues/48773
2728
    qa: scrub does not complete
2729
* https://tracker.ceph.com/issues/51975
2730
    pybind/mgr/stats: KeyError
2731 19 Patrick Donnelly
2732
2733
h3. 2021 July 28
2734
2735
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
2736
2737
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
2738
2739
* https://tracker.ceph.com/issues/51905
2740
    qa: "error reading sessionmap 'mds1_sessionmap'"
2741
* https://tracker.ceph.com/issues/48773
2742
    qa: scrub does not complete
2743
* https://tracker.ceph.com/issues/50250
2744
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2745
* https://tracker.ceph.com/issues/51267
2746
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
2747
* https://tracker.ceph.com/issues/51279
2748
    kclient hangs on umount (testing branch)
2749 18 Patrick Donnelly
2750
2751
h3. 2021 July 16
2752
2753
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
2754
2755
* https://tracker.ceph.com/issues/48773
2756
    qa: scrub does not complete
2757
* https://tracker.ceph.com/issues/48772
2758
    qa: pjd: not ok 9, 44, 80
2759
* https://tracker.ceph.com/issues/45434
2760
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2761
* https://tracker.ceph.com/issues/51279
2762
    kclient hangs on umount (testing branch)
2763
* https://tracker.ceph.com/issues/50824
2764
    qa: snaptest-git-ceph bus error
2765 17 Patrick Donnelly
2766
2767
h3. 2021 July 04
2768
2769
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
2770
2771
* https://tracker.ceph.com/issues/48773
2772
    qa: scrub does not complete
2773
* https://tracker.ceph.com/issues/39150
2774
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
2775
* https://tracker.ceph.com/issues/45434
2776
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2777
* https://tracker.ceph.com/issues/51282
2778
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2779
* https://tracker.ceph.com/issues/48771
2780
    qa: iogen: workload fails to cause balancing
2781
* https://tracker.ceph.com/issues/51279
2782
    kclient hangs on umount (testing branch)
2783
* https://tracker.ceph.com/issues/50250
2784
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2785 16 Patrick Donnelly
2786
2787
h3. 2021 July 01
2788
2789
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
2790
2791
* https://tracker.ceph.com/issues/51197
2792
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2793
* https://tracker.ceph.com/issues/50866
2794
    osd: stat mismatch on objects
2795
* https://tracker.ceph.com/issues/48773
2796
    qa: scrub does not complete
2797 15 Patrick Donnelly
2798
2799
h3. 2021 June 26
2800
2801
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
2802
2803
* https://tracker.ceph.com/issues/51183
2804
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2805
* https://tracker.ceph.com/issues/51410
2806
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
2807
* https://tracker.ceph.com/issues/48773
2808
    qa: scrub does not complete
2809
* https://tracker.ceph.com/issues/51282
2810
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2811
* https://tracker.ceph.com/issues/51169
2812
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2813
* https://tracker.ceph.com/issues/48772
2814
    qa: pjd: not ok 9, 44, 80
2815 14 Patrick Donnelly
2816
2817
h3. 2021 June 21
2818
2819
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2820
2821
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2822
2823
* https://tracker.ceph.com/issues/51282
2824
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2825
* https://tracker.ceph.com/issues/51183
2826
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2827
* https://tracker.ceph.com/issues/48773
2828
    qa: scrub does not complete
2829
* https://tracker.ceph.com/issues/48771
2830
    qa: iogen: workload fails to cause balancing
2831
* https://tracker.ceph.com/issues/51169
2832
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2833
* https://tracker.ceph.com/issues/50495
2834
    libcephfs: shutdown race fails with status 141
2835
* https://tracker.ceph.com/issues/45434
2836
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2837
* https://tracker.ceph.com/issues/50824
2838
    qa: snaptest-git-ceph bus error
2839
* https://tracker.ceph.com/issues/50223
2840
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2841 13 Patrick Donnelly
2842
2843
h3. 2021 June 16
2844
2845
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2846
2847
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2848
2849
* https://tracker.ceph.com/issues/45434
2850
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2851
* https://tracker.ceph.com/issues/51169
2852
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2853
* https://tracker.ceph.com/issues/43216
2854
    MDSMonitor: removes MDS coming out of quorum election
2855
* https://tracker.ceph.com/issues/51278
2856
    mds: "FAILED ceph_assert(!segments.empty())"
2857
* https://tracker.ceph.com/issues/51279
2858
    kclient hangs on umount (testing branch)
2859
* https://tracker.ceph.com/issues/51280
2860
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2861
* https://tracker.ceph.com/issues/51183
2862
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2863
* https://tracker.ceph.com/issues/51281
2864
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2865
* https://tracker.ceph.com/issues/48773
2866
    qa: scrub does not complete
2867
* https://tracker.ceph.com/issues/51076
2868
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2869
* https://tracker.ceph.com/issues/51228
2870
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2871
* https://tracker.ceph.com/issues/51282
2872
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
2873 12 Patrick Donnelly
2874
2875
h3. 2021 June 14
2876
2877
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
2878
2879
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2880
2881
* https://tracker.ceph.com/issues/51169
2882
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2883
* https://tracker.ceph.com/issues/51228
2884
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2885
* https://tracker.ceph.com/issues/48773
2886
    qa: scrub does not complete
2887
* https://tracker.ceph.com/issues/51183
2888
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2889
* https://tracker.ceph.com/issues/45434
2890
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2891
* https://tracker.ceph.com/issues/51182
2892
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2893
* https://tracker.ceph.com/issues/51229
2894
    qa: test_multi_snap_schedule list difference failure
2895
* https://tracker.ceph.com/issues/50821
2896
    qa: untar_snap_rm failure during mds thrashing
2897 11 Patrick Donnelly
2898
2899
h3. 2021 June 13
2900
2901
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
2902
2903
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2904
2905
* https://tracker.ceph.com/issues/51169
2906
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2907
* https://tracker.ceph.com/issues/48773
2908
    qa: scrub does not complete
2909
* https://tracker.ceph.com/issues/51182
2910
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2911
* https://tracker.ceph.com/issues/51183
2912
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2913
* https://tracker.ceph.com/issues/51197
2914
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2915
* https://tracker.ceph.com/issues/45434
2916 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2917
2918
h3. 2021 June 11
2919
2920
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2921
2922
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2923
2924
* https://tracker.ceph.com/issues/51169
2925
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2926
* https://tracker.ceph.com/issues/45434
2927
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2928
* https://tracker.ceph.com/issues/48771
2929
    qa: iogen: workload fails to cause balancing
2930
* https://tracker.ceph.com/issues/43216
2931
    MDSMonitor: removes MDS coming out of quorum election
2932
* https://tracker.ceph.com/issues/51182
2933
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2934
* https://tracker.ceph.com/issues/50223
2935
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2936
* https://tracker.ceph.com/issues/48773
2937
    qa: scrub does not complete
2938
* https://tracker.ceph.com/issues/51183
2939
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2940
* https://tracker.ceph.com/issues/51184
2941
    qa: fs:bugs does not specify distro
2942 9 Patrick Donnelly
2943
2944
h3. 2021 June 03
2945
2946
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2947
2948
* https://tracker.ceph.com/issues/45434
2949
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2950
* https://tracker.ceph.com/issues/50016
2951
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2952
* https://tracker.ceph.com/issues/50821
2953
    qa: untar_snap_rm failure during mds thrashing
2954
* https://tracker.ceph.com/issues/50622 (regression)
2955
    msg: active_connections regression
2956
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2957
    qa: failed umount in test_volumes
2958
* https://tracker.ceph.com/issues/48773
2959
    qa: scrub does not complete
2960
* https://tracker.ceph.com/issues/43216
2961
    MDSMonitor: removes MDS coming out of quorum election
2962 7 Patrick Donnelly
2963
2964 8 Patrick Donnelly
h3. 2021 May 18
2965
2966
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2967
2968
Regression in testing kernel caused some failures. Ilya fixed those and the rerun
2969
looked better. Some odd new noise in the rerun relating to packaging and "No
2970
module named 'tasks.ceph'".
2971
2972
* https://tracker.ceph.com/issues/50824
2973
    qa: snaptest-git-ceph bus error
2974
* https://tracker.ceph.com/issues/50622 (regression)
2975
    msg: active_connections regression
2976
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2977
    qa: failed umount in test_volumes
2978
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2979
    qa: quota failure
2980
2981
2982 7 Patrick Donnelly
h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

There was also a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing