Main » History » Version 196

Patrick Donnelly, 10/20/2023 12:57 PM

1 79 Venky Shankar
h1. MAIN
2
3 148 Rishabh Dave
h3. NEW ENTRY BELOW
4
5 195 Venky Shankar
h3. 18 Oct 2023
6
7
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231018.065603
8
9
* https://tracker.ceph.com/issues/52624
10
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
11
* https://tracker.ceph.com/issues/57676
12
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
13
* https://tracker.ceph.com/issues/63233
14
    mon|client|mds: valgrind reports possible leaks in the MDS
15
* https://tracker.ceph.com/issues/63141
16
    qa/cephfs: test_idem_unaffected_root_squash fails
17
* https://tracker.ceph.com/issues/59531
18
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
19
* https://tracker.ceph.com/issues/62658
20
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
21
* https://tracker.ceph.com/issues/62580
22
    testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
23
* https://tracker.ceph.com/issues/62067
24
    ffsb.sh failure "Resource temporarily unavailable"
25
* https://tracker.ceph.com/issues/57655
26
    qa: fs:mixed-clients kernel_untar_build failure
27
* https://tracker.ceph.com/issues/62036
28
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
29
* https://tracker.ceph.com/issues/58945
30
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
31
* https://tracker.ceph.com/issues/62847
32
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
33
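The OSD bench warning above for https://tracker.ceph.com/issues/59531 recommends establishing the OSD's IOPS capacity with an external benchmark such as fio. A minimal workaround sketch is below; it is not the fix tracked in the issue, the device path, osd id and IOPS value are placeholders, and the osd_mclock_max_capacity_iops_* options assume an mclock-based (Quincy or later) cluster.

<pre>
# Measure raw 4k random-write IOPS with fio on a comparable spare device.
# (Do NOT run this against a device backing a live OSD; randwrite is destructive.)
fio --name=osd-iops --filename=/dev/sdX --direct=1 --rw=randwrite --bs=4k \
    --iodepth=64 --runtime=60 --time_based --output-format=json > osd-iops.json

# Feed the measured IOPS back to mclock so it stops relying on the 'osd bench' result.
# Use osd_mclock_max_capacity_iops_hdd instead for spinning disks.
ceph config set osd.7 osd_mclock_max_capacity_iops_ssd 20000
</pre>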
34 193 Venky Shankar
h3. 13 Oct 2023
35
36
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215
37
38
* https://tracker.ceph.com/issues/52624
39
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
40
* https://tracker.ceph.com/issues/62936
41
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
42
* https://tracker.ceph.com/issues/47292
43
    cephfs-shell: test_df_for_valid_file failure
44
* https://tracker.ceph.com/issues/63141
45
    qa/cephfs: test_idem_unaffected_root_squash fails
46
* https://tracker.ceph.com/issues/62081
47
    tasks/fscrypt-common does not finish, times out
48 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58945
49
    qa: xfstests-dev's generic test suite has 20 failures with fuse client
50 194 Venky Shankar
* https://tracker.ceph.com/issues/63233
51
    mon|client|mds: valgrind reports possible leaks in the MDS
52 193 Venky Shankar
53 190 Patrick Donnelly
h3. 16 Oct 2023
54
55
https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825
56
57 192 Patrick Donnelly
Infrastructure issues:
58
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
59
    Host lost.
60
61 196 Patrick Donnelly
One followup fix:
62
* https://pulpito.ceph.com/pdonnell-2023-10-20_00:33:29-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/
63
64 192 Patrick Donnelly
Failures:
65
66
* https://tracker.ceph.com/issues/56694
67
    qa: avoid blocking forever on hung umount
68
* https://tracker.ceph.com/issues/63089
69
    qa: tasks/mirror times out
70
* https://tracker.ceph.com/issues/52624
71
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
72
* https://tracker.ceph.com/issues/59531
73
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
74
* https://tracker.ceph.com/issues/57676
75
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
76
* https://tracker.ceph.com/issues/62658 
77
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
78
* https://tracker.ceph.com/issues/61243
79
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
80
* https://tracker.ceph.com/issues/57656
81
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
82
* https://tracker.ceph.com/issues/63233
83
  mon|client|mds: valgrind reports possible leaks in the MDS
84
85 189 Rishabh Dave
h3. 9 Oct 2023
86
87
https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/
88
89
* https://tracker.ceph.com/issues/54460
90
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
91
* https://tracker.ceph.com/issues/63141
92
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
93
* https://tracker.ceph.com/issues/62937
94
  logrotate doesn't support parallel execution on the same set of logfiles (see the flock sketch after this list)
95
* https://tracker.ceph.com/issues/61400
96
  valgrind+ceph-mon issues
97
* https://tracker.ceph.com/issues/57676
98
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
99
* https://tracker.ceph.com/issues/55805
100
  error during scrub thrashing reached max tries in 900 secs
101
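The logrotate failure above (https://tracker.ceph.com/issues/62937) comes from running logrotate in parallel over the same set of log files. One generic way to serialize such invocations, shown only as a sketch (the lock-file path is an example, and this is not necessarily the fix adopted in qa), is to wrap the call in flock:

<pre>
# Take an exclusive lock before rotating; a second concurrent invocation
# exits immediately instead of racing on the same logfiles (-n = non-blocking).
flock -n /var/lock/ceph-test-logrotate.lock \
    logrotate /etc/logrotate.d/ceph-test.conf
</pre>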
102 188 Venky Shankar
h3. 26 Sep 2023
103
104
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818
105
106
* https://tracker.ceph.com/issues/52624
107
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
108
* https://tracker.ceph.com/issues/62873
109
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
110
* https://tracker.ceph.com/issues/61400
111
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
112
* https://tracker.ceph.com/issues/57676
113
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
114
* https://tracker.ceph.com/issues/62682
115
    mon: no mdsmap broadcast after "fs set joinable" is set to true
116
* https://tracker.ceph.com/issues/63089
117
    qa: tasks/mirror times out
118
119 185 Rishabh Dave
h3. 22 Sep 2023
120
121
https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/
122
123
* https://tracker.ceph.com/issues/59348
124
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
125
* https://tracker.ceph.com/issues/59344
126
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
127
* https://tracker.ceph.com/issues/59531
128
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" 
129
* https://tracker.ceph.com/issues/61574
130
  build failure for mdtest project
131
* https://tracker.ceph.com/issues/62702
132
  fsstress.sh: MDS slow requests for the internal 'rename' requests
133
* https://tracker.ceph.com/issues/57676
134
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
135
136
* https://tracker.ceph.com/issues/62863 
137
  deadlock in ceph-fuse causes teuthology job to hang and fail
138
* https://tracker.ceph.com/issues/62870
139
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
140
* https://tracker.ceph.com/issues/62873
141
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
142
143 186 Venky Shankar
h3. 20 Sep 2023
144
145
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635
146
147
* https://tracker.ceph.com/issues/52624
148
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
149
* https://tracker.ceph.com/issues/61400
150
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
151
* https://tracker.ceph.com/issues/61399
152
    libmpich: undefined references to fi_strerror
153
* https://tracker.ceph.com/issues/62081
154
    tasks/fscrypt-common does not finish, times out
155
* https://tracker.ceph.com/issues/62658 
156
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
157
* https://tracker.ceph.com/issues/62915
158
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
159
* https://tracker.ceph.com/issues/59531
160
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
161
* https://tracker.ceph.com/issues/62873
162
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
163
* https://tracker.ceph.com/issues/62936
164
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
165
* https://tracker.ceph.com/issues/62937
166
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
167
* https://tracker.ceph.com/issues/62510
168
    snaptest-git-ceph.sh failure with fs/thrash
169
* https://tracker.ceph.com/issues/62081
170
    tasks/fscrypt-common does not finish, times out
171
* https://tracker.ceph.com/issues/62126
172
    test failure: suites/blogbench.sh stops running
173 187 Venky Shankar
* https://tracker.ceph.com/issues/62682
174
    mon: no mdsmap broadcast after "fs set joinable" is set to true
175 186 Venky Shankar
176 184 Milind Changire
h3. 19 Sep 2023
177
178
http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/
179
180
* https://tracker.ceph.com/issues/58220#note-9
181
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
182
* https://tracker.ceph.com/issues/62702
183
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
184
* https://tracker.ceph.com/issues/57676
185
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
186
* https://tracker.ceph.com/issues/59348
187
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
188
* https://tracker.ceph.com/issues/52624
189
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
190
* https://tracker.ceph.com/issues/51964
191
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
192
* https://tracker.ceph.com/issues/61243
193
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
194
* https://tracker.ceph.com/issues/59344
195
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
196
* https://tracker.ceph.com/issues/62873
197
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
198
* https://tracker.ceph.com/issues/59413
199
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
200
* https://tracker.ceph.com/issues/53859
201
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
202
* https://tracker.ceph.com/issues/62482
203
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
204
205 178 Patrick Donnelly
206 177 Venky Shankar
h3. 13 Sep 2023
207
208
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909
209
210
* https://tracker.ceph.com/issues/52624
211
      qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
212
* https://tracker.ceph.com/issues/57655
213
    qa: fs:mixed-clients kernel_untar_build failure
214
* https://tracker.ceph.com/issues/57676
215
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
216
* https://tracker.ceph.com/issues/61243
217
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
218
* https://tracker.ceph.com/issues/62567
219
    postgres workunit times out - MDS_SLOW_REQUEST in logs
220
* https://tracker.ceph.com/issues/61400
221
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
222
* https://tracker.ceph.com/issues/61399
223
    libmpich: undefined references to fi_strerror
224
* https://tracker.ceph.com/issues/57655
225
    qa: fs:mixed-clients kernel_untar_build failure
226
* https://tracker.ceph.com/issues/57676
227
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
228
* https://tracker.ceph.com/issues/51964
229
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
230
* https://tracker.ceph.com/issues/62081
231
    tasks/fscrypt-common does not finish, times out
232 178 Patrick Donnelly
233 179 Patrick Donnelly
h3. 12 Sep 2023
234 178 Patrick Donnelly
235
https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/
236 1 Patrick Donnelly
237 181 Patrick Donnelly
A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:
238
239 182 Patrick Donnelly
* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.
240 181 Patrick Donnelly
241
Failures:
242
243 179 Patrick Donnelly
* https://tracker.ceph.com/issues/59348
244
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
245
* https://tracker.ceph.com/issues/57656
246
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
247
* https://tracker.ceph.com/issues/55805
248
  error during scrub thrashing reached max tries in 900 secs
249
* https://tracker.ceph.com/issues/62067
250
    ffsb.sh failure "Resource temporarily unavailable"
251
* https://tracker.ceph.com/issues/59344
252
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
253
* https://tracker.ceph.com/issues/61399
254 180 Patrick Donnelly
  libmpich: undefined references to fi_strerror
255
* https://tracker.ceph.com/issues/62832
256
  common: config_proxy deadlock during shutdown (and possibly other times)
257
* https://tracker.ceph.com/issues/59413
258 1 Patrick Donnelly
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
259 181 Patrick Donnelly
* https://tracker.ceph.com/issues/57676
260
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
261
* https://tracker.ceph.com/issues/62567
262
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
263
* https://tracker.ceph.com/issues/54460
264
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
265
* https://tracker.ceph.com/issues/58220#note-9
266
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
267
* https://tracker.ceph.com/issues/59348
268
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
269 183 Patrick Donnelly
* https://tracker.ceph.com/issues/62847
270
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
271
* https://tracker.ceph.com/issues/62848
272
    qa: fail_fs upgrade scenario hanging
273
* https://tracker.ceph.com/issues/62081
274
    tasks/fscrypt-common does not finish, times out
275 177 Venky Shankar
276 176 Venky Shankar
h3. 11 Sep 2023
277 175 Venky Shankar
278
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114
279
280
* https://tracker.ceph.com/issues/52624
281
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
282
* https://tracker.ceph.com/issues/61399
283
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
284
* https://tracker.ceph.com/issues/57655
285
    qa: fs:mixed-clients kernel_untar_build failure
286
* https://tracker.ceph.com/issues/61399
287
    ior build failure
288
* https://tracker.ceph.com/issues/59531
289
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
290
* https://tracker.ceph.com/issues/59344
291
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
292
* https://tracker.ceph.com/issues/59346
293
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
294
* https://tracker.ceph.com/issues/59348
295
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
296
* https://tracker.ceph.com/issues/57676
297
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
298
* https://tracker.ceph.com/issues/61243
299
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
300
* https://tracker.ceph.com/issues/62567
301
  postgres workunit times out - MDS_SLOW_REQUEST in logs
302
303
304 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
305
306
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
307
308
* https://tracker.ceph.com/issues/51964
309
  test_cephfs_mirror_restart_sync_on_blocklist failure
310
* https://tracker.ceph.com/issues/59348
311
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
312
* https://tracker.ceph.com/issues/53859
313
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
314
* https://tracker.ceph.com/issues/61892
315
  test_strays.TestStrays.test_snapshot_remove failed
316
* https://tracker.ceph.com/issues/54460
317
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
318
* https://tracker.ceph.com/issues/59346
319
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
320
* https://tracker.ceph.com/issues/59344
321
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
322
* https://tracker.ceph.com/issues/62484
323
  qa: ffsb.sh test failure
324
* https://tracker.ceph.com/issues/62567
325
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
326
  
327
* https://tracker.ceph.com/issues/61399
328
  ior build failure
329
* https://tracker.ceph.com/issues/57676
330
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
331
* https://tracker.ceph.com/issues/55805
332
  error during scrub thrashing reached max tries in 900 secs
333
334 172 Rishabh Dave
h3. 6 Sep 2023
335 171 Rishabh Dave
336 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
337 171 Rishabh Dave
338 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
339
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
340 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
341
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
342 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
343 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
344
* https://tracker.ceph.com/issues/59348
345
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
346
* https://tracker.ceph.com/issues/54462
347
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
348
* https://tracker.ceph.com/issues/62556
349
  test_acls: xfstests_dev: python2 is missing
350
* https://tracker.ceph.com/issues/62067
351
  ffsb.sh failure "Resource temporarily unavailable"
352
* https://tracker.ceph.com/issues/57656
353
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
354 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
355
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
356 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
357 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
358
359 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
360
  ior build failure
361
* https://tracker.ceph.com/issues/57676
362
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
363
* https://tracker.ceph.com/issues/55805
364
  error during scrub thrashing reached max tries in 900 secs
365 173 Rishabh Dave
366
* https://tracker.ceph.com/issues/62567
367
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
368
* https://tracker.ceph.com/issues/62702
369
  workunit test suites/fsstress.sh on smithi066 with status 124
370 170 Rishabh Dave
371
h3. 5 Sep 2023
372
373
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
374
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
375
  this run has failures, but according to Adam King they are not relevant and should be ignored
376
377
* https://tracker.ceph.com/issues/61892
378
  test_snapshot_remove (test_strays.TestStrays) failed
379
* https://tracker.ceph.com/issues/59348
380
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
381
* https://tracker.ceph.com/issues/54462
382
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
383
* https://tracker.ceph.com/issues/62067
384
  ffsb.sh failure "Resource temporarily unavailable"
385
* https://tracker.ceph.com/issues/57656 
386
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
387
* https://tracker.ceph.com/issues/59346
388
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
389
* https://tracker.ceph.com/issues/59344
390
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
391
* https://tracker.ceph.com/issues/50223
392
  client.xxxx isn't responding to mclientcaps(revoke)
393
* https://tracker.ceph.com/issues/57655
394
  qa: fs:mixed-clients kernel_untar_build failure
395
* https://tracker.ceph.com/issues/62187
396
  iozone.sh: line 5: iozone: command not found
397
 
398
* https://tracker.ceph.com/issues/61399
399
  ior build failure
400
* https://tracker.ceph.com/issues/57676
401
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
402
* https://tracker.ceph.com/issues/55805
403
  error during scrub thrashing reached max tries in 900 secs
404 169 Venky Shankar
405
406
h3. 31 Aug 2023
407
408
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
409
410
* https://tracker.ceph.com/issues/52624
411
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
412
* https://tracker.ceph.com/issues/62187
413
    iozone: command not found
414
* https://tracker.ceph.com/issues/61399
415
    ior build failure
416
* https://tracker.ceph.com/issues/59531
417
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
418
* https://tracker.ceph.com/issues/61399
419
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
420
* https://tracker.ceph.com/issues/57655
421
    qa: fs:mixed-clients kernel_untar_build failure
422
* https://tracker.ceph.com/issues/59344
423
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
424
* https://tracker.ceph.com/issues/59346
425
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
426
* https://tracker.ceph.com/issues/59348
427
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
428
* https://tracker.ceph.com/issues/59413
429
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
430
* https://tracker.ceph.com/issues/62653
431
    qa: unimplemented fcntl command: 1036 with fsstress
432
* https://tracker.ceph.com/issues/61400
433
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
434
* https://tracker.ceph.com/issues/62658
435
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
436
* https://tracker.ceph.com/issues/62188
437
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
438 168 Venky Shankar
439
440
h3. 25 Aug 2023
441
442
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
443
444
* https://tracker.ceph.com/issues/59344
445
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
446
* https://tracker.ceph.com/issues/59346
447
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
448
* https://tracker.ceph.com/issues/59348
449
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
450
* https://tracker.ceph.com/issues/57655
451
    qa: fs:mixed-clients kernel_untar_build failure
452
* https://tracker.ceph.com/issues/61243
453
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
454
* https://tracker.ceph.com/issues/61399
455
    ior build failure
456
* https://tracker.ceph.com/issues/61399
457
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
458
* https://tracker.ceph.com/issues/62484
459
    qa: ffsb.sh test failure
460
* https://tracker.ceph.com/issues/59531
461
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
462
* https://tracker.ceph.com/issues/62510
463
    snaptest-git-ceph.sh failure with fs/thrash
464 167 Venky Shankar
465
466
h3. 24 Aug 2023
467
468
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
469
470
* https://tracker.ceph.com/issues/57676
471
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
472
* https://tracker.ceph.com/issues/51964
473
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
474
* https://tracker.ceph.com/issues/59344
475
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
476
* https://tracker.ceph.com/issues/59346
477
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
478
* https://tracker.ceph.com/issues/59348
479
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
480
* https://tracker.ceph.com/issues/61399
481
    ior build failure
482
* https://tracker.ceph.com/issues/61399
483
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
484
* https://tracker.ceph.com/issues/62510
485
    snaptest-git-ceph.sh failure with fs/thrash
486
* https://tracker.ceph.com/issues/62484
487
    qa: ffsb.sh test failure
488
* https://tracker.ceph.com/issues/57087
489
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
490
* https://tracker.ceph.com/issues/57656
491
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
492
* https://tracker.ceph.com/issues/62187
493
    iozone: command not found
494
* https://tracker.ceph.com/issues/62188
495
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
496
* https://tracker.ceph.com/issues/62567
497
    postgres workunit times out - MDS_SLOW_REQUEST in logs
498 166 Venky Shankar
499
500
h3. 22 Aug 2023
501
502
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
503
504
* https://tracker.ceph.com/issues/57676
505
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
506
* https://tracker.ceph.com/issues/51964
507
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
508
* https://tracker.ceph.com/issues/59344
509
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
510
* https://tracker.ceph.com/issues/59346
511
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
512
* https://tracker.ceph.com/issues/59348
513
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
514
* https://tracker.ceph.com/issues/61399
515
    ior build failure
516
* https://tracker.ceph.com/issues/61399
517
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
518
* https://tracker.ceph.com/issues/57655
519
    qa: fs:mixed-clients kernel_untar_build failure
520
* https://tracker.ceph.com/issues/61243
521
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
522
* https://tracker.ceph.com/issues/62188
523
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
524
* https://tracker.ceph.com/issues/62510
525
    snaptest-git-ceph.sh failure with fs/thrash
526
* https://tracker.ceph.com/issues/62511
527
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
528 165 Venky Shankar
529
530
h3. 14 Aug 2023
531
532
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
533
534
* https://tracker.ceph.com/issues/51964
535
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
536
* https://tracker.ceph.com/issues/61400
537
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
538
* https://tracker.ceph.com/issues/61399
539
    ior build failure
540
* https://tracker.ceph.com/issues/59348
541
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
542
* https://tracker.ceph.com/issues/59531
543
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
544
* https://tracker.ceph.com/issues/59344
545
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
546
* https://tracker.ceph.com/issues/59346
547
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
548
* https://tracker.ceph.com/issues/61399
549
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
550
* https://tracker.ceph.com/issues/59684 [kclient bug]
551
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
552
* https://tracker.ceph.com/issues/61243 (NEW)
553
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
554
* https://tracker.ceph.com/issues/57655
555
    qa: fs:mixed-clients kernel_untar_build failure
556
* https://tracker.ceph.com/issues/57656
557
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
558 163 Venky Shankar
559
560
h3. 28 July 2023
561
562
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
563
564
* https://tracker.ceph.com/issues/51964
565
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
566
* https://tracker.ceph.com/issues/61400
567
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
568
* https://tracker.ceph.com/issues/61399
569
    ior build failure
570
* https://tracker.ceph.com/issues/57676
571
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
572
* https://tracker.ceph.com/issues/59348
573
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
574
* https://tracker.ceph.com/issues/59531
575
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
576
* https://tracker.ceph.com/issues/59344
577
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
578
* https://tracker.ceph.com/issues/59346
579
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
580
* https://github.com/ceph/ceph/pull/52556
581
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
582
* https://tracker.ceph.com/issues/62187
583
    iozone: command not found
584
* https://tracker.ceph.com/issues/61399
585
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
586
* https://tracker.ceph.com/issues/62188
587 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
588 158 Rishabh Dave
589
h3. 24 Jul 2023
590
591
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
592
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
593
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures:
594
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
595
One more run to check whether blogbench.sh fails every time:
596
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
597
blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing:
598 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
599
600
* https://tracker.ceph.com/issues/61892
601
  test_snapshot_remove (test_strays.TestStrays) failed
602
* https://tracker.ceph.com/issues/53859
603
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
604
* https://tracker.ceph.com/issues/61982
605
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
606
* https://tracker.ceph.com/issues/52438
607
  qa: ffsb timeout
608
* https://tracker.ceph.com/issues/54460
609
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
610
* https://tracker.ceph.com/issues/57655
611
  qa: fs:mixed-clients kernel_untar_build failure
612
* https://tracker.ceph.com/issues/48773
613
  reached max tries: scrub does not complete
614
* https://tracker.ceph.com/issues/58340
615
  mds: fsstress.sh hangs with multimds
616
* https://tracker.ceph.com/issues/61400
617
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
618
* https://tracker.ceph.com/issues/57206
619
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
620
  
621
* https://tracker.ceph.com/issues/57656
622
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
623
* https://tracker.ceph.com/issues/61399
624
  ior build failure
625
* https://tracker.ceph.com/issues/57676
626
  error during scrub thrashing: backtrace
627
  
628
* https://tracker.ceph.com/issues/38452
629
  'sudo -u postgres -- pgbench -s 500 -i' failed
630 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
631 157 Venky Shankar
  blogbench.sh failure
632
633
h3. 18 July 2023
634
635
* https://tracker.ceph.com/issues/52624
636
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
637
* https://tracker.ceph.com/issues/57676
638
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
639
* https://tracker.ceph.com/issues/54460
640
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
641
* https://tracker.ceph.com/issues/57655
642
    qa: fs:mixed-clients kernel_untar_build failure
643
* https://tracker.ceph.com/issues/51964
644
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
645
* https://tracker.ceph.com/issues/59344
646
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
647
* https://tracker.ceph.com/issues/61182
648
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
649
* https://tracker.ceph.com/issues/61957
650
    test_client_limits.TestClientLimits.test_client_release_bug
651
* https://tracker.ceph.com/issues/59348
652
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
653
* https://tracker.ceph.com/issues/61892
654
    test_strays.TestStrays.test_snapshot_remove failed
655
* https://tracker.ceph.com/issues/59346
656
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
657
* https://tracker.ceph.com/issues/44565
658
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
659
* https://tracker.ceph.com/issues/62067
660
    ffsb.sh failure "Resource temporarily unavailable"
661 156 Venky Shankar
662
663
h3. 17 July 2023
664
665
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
666
667
* https://tracker.ceph.com/issues/61982
668
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
669
* https://tracker.ceph.com/issues/59344
670
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
671
* https://tracker.ceph.com/issues/61182
672
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
673
* https://tracker.ceph.com/issues/61957
674
    test_client_limits.TestClientLimits.test_client_release_bug
675
* https://tracker.ceph.com/issues/61400
676
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
677
* https://tracker.ceph.com/issues/59348
678
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
679
* https://tracker.ceph.com/issues/61892
680
    test_strays.TestStrays.test_snapshot_remove failed
681
* https://tracker.ceph.com/issues/59346
682
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
683
* https://tracker.ceph.com/issues/62036
684
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
685
* https://tracker.ceph.com/issues/61737
686
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
687
* https://tracker.ceph.com/issues/44565
688
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
689 155 Rishabh Dave
690 1 Patrick Donnelly
691 153 Rishabh Dave
h3. 13 July 2023 Run 2
692 152 Rishabh Dave
693
694
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
695
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
696
697
* https://tracker.ceph.com/issues/61957
698
  test_client_limits.TestClientLimits.test_client_release_bug
699
* https://tracker.ceph.com/issues/61982
700
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
701
* https://tracker.ceph.com/issues/59348
702
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
703
* https://tracker.ceph.com/issues/59344
704
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
705
* https://tracker.ceph.com/issues/54460
706
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
707
* https://tracker.ceph.com/issues/57655
708
  qa: fs:mixed-clients kernel_untar_build failure
709
* https://tracker.ceph.com/issues/61400
710
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
711
* https://tracker.ceph.com/issues/61399
712
  ior build failure
713
714 151 Venky Shankar
h3. 13 July 2023
715
716
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
717
718
* https://tracker.ceph.com/issues/54460
719
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
720
* https://tracker.ceph.com/issues/61400
721
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
722
* https://tracker.ceph.com/issues/57655
723
    qa: fs:mixed-clients kernel_untar_build failure
724
* https://tracker.ceph.com/issues/61945
725
    LibCephFS.DelegTimeout failure
726
* https://tracker.ceph.com/issues/52624
727
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
728
* https://tracker.ceph.com/issues/57676
729
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
730
* https://tracker.ceph.com/issues/59348
731
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
732
* https://tracker.ceph.com/issues/59344
733
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
734
* https://tracker.ceph.com/issues/51964
735
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
736
* https://tracker.ceph.com/issues/59346
737
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
738
* https://tracker.ceph.com/issues/61982
739
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
740 150 Rishabh Dave
741
742
h3. 13 Jul 2023
743
744
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
745
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
746
747
* https://tracker.ceph.com/issues/61957
748
  test_client_limits.TestClientLimits.test_client_release_bug
749
* https://tracker.ceph.com/issues/59348
750
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
751
* https://tracker.ceph.com/issues/59346
752
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
753
* https://tracker.ceph.com/issues/48773
754
  scrub does not complete: reached max tries
755
* https://tracker.ceph.com/issues/59344
756
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
757
* https://tracker.ceph.com/issues/52438
758
  qa: ffsb timeout
759
* https://tracker.ceph.com/issues/57656
760
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
761
* https://tracker.ceph.com/issues/58742
762
  xfstests-dev: kcephfs: generic
763
* https://tracker.ceph.com/issues/61399
764 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
765 149 Rishabh Dave
766 148 Rishabh Dave
h3. 12 July 2023
767
768
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
769
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
770
771
* https://tracker.ceph.com/issues/61892
772
  test_strays.TestStrays.test_snapshot_remove failed
773
* https://tracker.ceph.com/issues/59348
774
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
775
* https://tracker.ceph.com/issues/53859
776
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
777
* https://tracker.ceph.com/issues/59346
778
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
779
* https://tracker.ceph.com/issues/58742
780
  xfstests-dev: kcephfs: generic
781
* https://tracker.ceph.com/issues/59344
782
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
783
* https://tracker.ceph.com/issues/52438
784
  qa: ffsb timeout
785
* https://tracker.ceph.com/issues/57656
786
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
787
* https://tracker.ceph.com/issues/54460
788
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
789
* https://tracker.ceph.com/issues/57655
790
  qa: fs:mixed-clients kernel_untar_build failure
791
* https://tracker.ceph.com/issues/61182
792
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
793
* https://tracker.ceph.com/issues/61400
794
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
795 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
796 146 Patrick Donnelly
  reached max tries: scrub does not complete
797
798
h3. 05 July 2023
799
800
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
801
802 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
803 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
804
805
h3. 27 Jun 2023
806
807
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
808 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
809
810
* https://tracker.ceph.com/issues/59348
811
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
812
* https://tracker.ceph.com/issues/54460
813
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
814
* https://tracker.ceph.com/issues/59346
815
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
816
* https://tracker.ceph.com/issues/59344
817
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
818
* https://tracker.ceph.com/issues/61399
819
  libmpich: undefined references to fi_strerror
820
* https://tracker.ceph.com/issues/50223
821
  client.xxxx isn't responding to mclientcaps(revoke)
822 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
823
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
824 142 Venky Shankar
825
826
h3. 22 June 2023
827
828
* https://tracker.ceph.com/issues/57676
829
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
830
* https://tracker.ceph.com/issues/54460
831
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
832
* https://tracker.ceph.com/issues/59344
833
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
834
* https://tracker.ceph.com/issues/59348
835
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
836
* https://tracker.ceph.com/issues/61400
837
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
838
* https://tracker.ceph.com/issues/57655
839
    qa: fs:mixed-clients kernel_untar_build failure
840
* https://tracker.ceph.com/issues/61394
841
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
842
* https://tracker.ceph.com/issues/61762
843
    qa: wait_for_clean: failed before timeout expired
844
* https://tracker.ceph.com/issues/61775
845
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
846
* https://tracker.ceph.com/issues/44565
847
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
848
* https://tracker.ceph.com/issues/61790
849
    cephfs client to mds comms remain silent after reconnect
850
* https://tracker.ceph.com/issues/61791
851
    snaptest-git-ceph.sh test timed out (job dead)
852 139 Venky Shankar
853
854
h3. 20 June 2023
855
856
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
857
858
* https://tracker.ceph.com/issues/57676
859
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
860
* https://tracker.ceph.com/issues/54460
861
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
862 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
863 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
864 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
865 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
866
* https://tracker.ceph.com/issues/59344
867
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
868
* https://tracker.ceph.com/issues/59348
869
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
870
* https://tracker.ceph.com/issues/57656
871
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
872
* https://tracker.ceph.com/issues/61400
873
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
874
* https://tracker.ceph.com/issues/57655
875
    qa: fs:mixed-clients kernel_untar_build failure
876
* https://tracker.ceph.com/issues/44565
877
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
878
* https://tracker.ceph.com/issues/61737
879 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
880
881
h3. 16 June 2023
882
883 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
884 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
885 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
886 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
887
888
889
* https://tracker.ceph.com/issues/59344
890
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
891 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
892
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
893 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
894
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
895
* https://tracker.ceph.com/issues/57656
896
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
897
* https://tracker.ceph.com/issues/54460
898
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
899 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
900
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
901 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
902
  libmpich: undefined references to fi_strerror
903
* https://tracker.ceph.com/issues/58945
904
  xfstests-dev: ceph-fuse: generic 
905 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
906 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
907
908
h3. 24 May 2023
909
910
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
911
912
* https://tracker.ceph.com/issues/57676
913
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
914
* https://tracker.ceph.com/issues/59683
915
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
916
* https://tracker.ceph.com/issues/61399
917
    qa: "[Makefile:299: ior] Error 1"
918
* https://tracker.ceph.com/issues/61265
919
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
920
* https://tracker.ceph.com/issues/59348
921
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
922
* https://tracker.ceph.com/issues/59346
923
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
924
* https://tracker.ceph.com/issues/61400
925
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
926
* https://tracker.ceph.com/issues/54460
927
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
928
* https://tracker.ceph.com/issues/51964
929
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
930
* https://tracker.ceph.com/issues/59344
931
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
932
* https://tracker.ceph.com/issues/61407
933
    mds: abort on CInode::verify_dirfrags
934
* https://tracker.ceph.com/issues/48773
935
    qa: scrub does not complete
936
* https://tracker.ceph.com/issues/57655
937
    qa: fs:mixed-clients kernel_untar_build failure
938
* https://tracker.ceph.com/issues/61409
939 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
940
941
h3. 15 May 2023
942 130 Venky Shankar
943 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
944
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
945
946
* https://tracker.ceph.com/issues/52624
947
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
948
* https://tracker.ceph.com/issues/54460
949
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
950
* https://tracker.ceph.com/issues/57676
951
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
952
* https://tracker.ceph.com/issues/59684 [kclient bug]
953
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
954
* https://tracker.ceph.com/issues/59348
955
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
956 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
957
    dbench test results in call trace in dmesg [kclient bug]
958 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
959 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
960 125 Venky Shankar
961
 
962 129 Rishabh Dave
h3. 11 May 2023
963
964
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
965
966
* https://tracker.ceph.com/issues/59684 [kclient bug]
967
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
968
* https://tracker.ceph.com/issues/59348
969
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
970
* https://tracker.ceph.com/issues/57655
971
  qa: fs:mixed-clients kernel_untar_build failure
972
* https://tracker.ceph.com/issues/57676
973
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
974
* https://tracker.ceph.com/issues/55805
975
  error during scrub thrashing reached max tries in 900 secs
976
* https://tracker.ceph.com/issues/54460
977
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
978
* https://tracker.ceph.com/issues/57656
979
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
980
* https://tracker.ceph.com/issues/58220
981
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
982 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
983
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
984 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
985
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
986 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
987
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
988 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
989
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
990
991 125 Venky Shankar
h3. 11 May 2023
992 127 Venky Shankar
993
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
994 126 Venky Shankar
995 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553
996
was included in the branch; however, the PR got updated and needs a retest).
997
998
* https://tracker.ceph.com/issues/52624
999
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1000
* https://tracker.ceph.com/issues/54460
1001
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1002
* https://tracker.ceph.com/issues/57676
1003
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1004
* https://tracker.ceph.com/issues/59683
1005
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1006
* https://tracker.ceph.com/issues/59684 [kclient bug]
1007
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1008
* https://tracker.ceph.com/issues/59348
1009 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1010
1011
h3. 09 May 2023
1012
1013
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
1014
1015
* https://tracker.ceph.com/issues/52624
1016
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1017
* https://tracker.ceph.com/issues/58340
1018
    mds: fsstress.sh hangs with multimds
1019
* https://tracker.ceph.com/issues/54460
1020
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1021
* https://tracker.ceph.com/issues/57676
1022
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1023
* https://tracker.ceph.com/issues/51964
1024
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1025
* https://tracker.ceph.com/issues/59350
1026
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
1027
* https://tracker.ceph.com/issues/59683
1028
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
1029
* https://tracker.ceph.com/issues/59684 [kclient bug]
1030
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
1031
* https://tracker.ceph.com/issues/59348
1032 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
1033
1034
h3. 10 Apr 2023
1035
1036
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1037
1038
* https://tracker.ceph.com/issues/52624
1039
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1040
* https://tracker.ceph.com/issues/58340
1041
    mds: fsstress.sh hangs with multimds
1042
* https://tracker.ceph.com/issues/54460
1043
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1044
* https://tracker.ceph.com/issues/57676
1045
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1046 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1047 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1048 121 Rishabh Dave
1049 120 Rishabh Dave
h3. 31 Mar 2023
1050 122 Rishabh Dave
1051
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1052 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1053
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1054
1055
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
1056
1057
* https://tracker.ceph.com/issues/57676
1058
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1059
* https://tracker.ceph.com/issues/54460
1060
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1061
* https://tracker.ceph.com/issues/58220
1062
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1063
* https://tracker.ceph.com/issues/58220#note-9
1064
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1065
* https://tracker.ceph.com/issues/56695
1066
  Command failed (workunit test suites/pjd.sh)
1067
* https://tracker.ceph.com/issues/58564 
1068
  workunit dbench failed with error code 1
1069
* https://tracker.ceph.com/issues/57206
1070
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1071
* https://tracker.ceph.com/issues/57580
1072
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1073
* https://tracker.ceph.com/issues/58940
1074
  ceph osd hit ceph_abort
1075
* https://tracker.ceph.com/issues/55805
1076 118 Venky Shankar
  error during scrub thrashing reached max tries in 900 secs
1077
1078
h3. 30 March 2023
1079
1080
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1081
1082
* https://tracker.ceph.com/issues/58938
1083
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1084
* https://tracker.ceph.com/issues/51964
1085
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1086
* https://tracker.ceph.com/issues/58340
1087 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1088
1089 115 Venky Shankar
h3. 29 March 2023
1090 114 Venky Shankar
1091
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1092
1093
* https://tracker.ceph.com/issues/56695
1094
    [RHEL stock] pjd test failures
1095
* https://tracker.ceph.com/issues/57676
1096
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1097
* https://tracker.ceph.com/issues/57087
1098
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1099 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1100
    mds: fsstress.sh hangs with multimds
1101 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1102
    qa: fs:mixed-clients kernel_untar_build failure
1103 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1104
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1105 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1106 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1107
1108
h3. 13 Mar 2023
1109
1110
* https://tracker.ceph.com/issues/56695
1111
    [RHEL stock] pjd test failures
1112
* https://tracker.ceph.com/issues/57676
1113
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1114
* https://tracker.ceph.com/issues/51964
1115
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1116
* https://tracker.ceph.com/issues/54460
1117
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1118
* https://tracker.ceph.com/issues/57656
1119 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1120
1121
h3. 09 Mar 2023
1122
1123
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1124
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1125
1126
* https://tracker.ceph.com/issues/56695
1127
    [RHEL stock] pjd test failures
1128
* https://tracker.ceph.com/issues/57676
1129
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1130
* https://tracker.ceph.com/issues/51964
1131
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1132
* https://tracker.ceph.com/issues/54460
1133
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1134
* https://tracker.ceph.com/issues/58340
1135
    mds: fsstress.sh hangs with multimds
1136
* https://tracker.ceph.com/issues/57087
1137 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1138
1139
h3. 07 Mar 2023
1140
1141
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1142
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1143
1144
* https://tracker.ceph.com/issues/56695
1145
    [RHEL stock] pjd test failures
1146
* https://tracker.ceph.com/issues/57676
1147
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1148
* https://tracker.ceph.com/issues/51964
1149
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1150
* https://tracker.ceph.com/issues/57656
1151
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1152
* https://tracker.ceph.com/issues/57655
1153
    qa: fs:mixed-clients kernel_untar_build failure
1154
* https://tracker.ceph.com/issues/58220
1155
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1156
* https://tracker.ceph.com/issues/54460
1157
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1158
* https://tracker.ceph.com/issues/58934
1159 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1160
1161
h3. 28 Feb 2023
1162
1163
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1164
1165
* https://tracker.ceph.com/issues/56695
1166
    [RHEL stock] pjd test failures
1167
* https://tracker.ceph.com/issues/57676
1168
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1169 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1170 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1171
1172 107 Venky Shankar
(teuthology infra issues causing testing delays; merging PRs whose tests pass)
1173
1174
h3. 25 Jan 2023
1175
1176
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1177
1178
* https://tracker.ceph.com/issues/52624
1179
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1180
* https://tracker.ceph.com/issues/56695
1181
    [RHEL stock] pjd test failures
1182
* https://tracker.ceph.com/issues/57676
1183
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1184
* https://tracker.ceph.com/issues/56446
1185
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1186
* https://tracker.ceph.com/issues/57206
1187
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1188
* https://tracker.ceph.com/issues/58220
1189
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1190
* https://tracker.ceph.com/issues/58340
1191
  mds: fsstress.sh hangs with multimds
1192
* https://tracker.ceph.com/issues/56011
1193
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1194
* https://tracker.ceph.com/issues/54460
1195 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1196
1197
h3. 30 Jan 2023
1198
1199
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1200
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1201 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1202
1203 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1204
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1205
* https://tracker.ceph.com/issues/56695
1206
  [RHEL stock] pjd test failures
1207
* https://tracker.ceph.com/issues/57676
1208
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1209
* https://tracker.ceph.com/issues/55332
1210
  Failure in snaptest-git-ceph.sh
1211
* https://tracker.ceph.com/issues/51964
1212
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1213
* https://tracker.ceph.com/issues/56446
1214
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1215
* https://tracker.ceph.com/issues/57655 
1216
  qa: fs:mixed-clients kernel_untar_build failure
1217
* https://tracker.ceph.com/issues/54460
1218
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1219 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1220
  mds: fsstress.sh hangs with multimds
1221 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1222 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1223
1224
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1225 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1226
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1227 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1228 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1229
1230
h3. 15 Dec 2022
1231
1232
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1233
1234
* https://tracker.ceph.com/issues/52624
1235
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1236
* https://tracker.ceph.com/issues/56695
1237
    [RHEL stock] pjd test failures
1238
* https://tracker.ceph.com/issues/58219
1239
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1240
* https://tracker.ceph.com/issues/57655
1241
    qa: fs:mixed-clients kernel_untar_build failure
1242
* https://tracker.ceph.com/issues/57676
1243
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1244
* https://tracker.ceph.com/issues/58340
1245 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1246
1247
h3. 08 Dec 2022
1248 99 Venky Shankar
1249 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1250
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1251
1252
(lots of transient git.ceph.com failures)
1253
1254
* https://tracker.ceph.com/issues/52624
1255
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1256
* https://tracker.ceph.com/issues/56695
1257
    [RHEL stock] pjd test failures
1258
* https://tracker.ceph.com/issues/57655
1259
    qa: fs:mixed-clients kernel_untar_build failure
1260
* https://tracker.ceph.com/issues/58219
1261
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1262
* https://tracker.ceph.com/issues/58220
1263
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1264 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1265
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1266 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1267
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1268
* https://tracker.ceph.com/issues/54460
1269
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1270 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1271 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1272
1273
h3. 14 Oct 2022
1274
1275
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1276
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1277
1278
* https://tracker.ceph.com/issues/52624
1279
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1280
* https://tracker.ceph.com/issues/55804
1281
    Command failed (workunit test suites/pjd.sh)
1282
* https://tracker.ceph.com/issues/51964
1283
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1284
* https://tracker.ceph.com/issues/57682
1285
    client: ERROR: test_reconnect_after_blocklisted
1286 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1287 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1288
1289
h3. 10 Oct 2022
1290 92 Rishabh Dave
1291 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1292
1293
reruns
1294
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1295 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1296 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1297 93 Rishabh Dave
    ** needed this PR to be merged in ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1298 91 Rishabh Dave
1299
known bugs
1300
* https://tracker.ceph.com/issues/52624
1301
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1302
* https://tracker.ceph.com/issues/50223
1303
  client.xxxx isn't responding to mclientcaps(revoke)
1304
* https://tracker.ceph.com/issues/57299
1305
  qa: test_dump_loads fails with JSONDecodeError
1306
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1307
  qa: fs:mixed-clients kernel_untar_build failure
1308
* https://tracker.ceph.com/issues/57206
1309 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1310
1311
h3. 2022 Sep 29
1312
1313
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1314
1315
* https://tracker.ceph.com/issues/55804
1316
  Command failed (workunit test suites/pjd.sh)
1317
* https://tracker.ceph.com/issues/36593
1318
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1319
* https://tracker.ceph.com/issues/52624
1320
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1321
* https://tracker.ceph.com/issues/51964
1322
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1323
* https://tracker.ceph.com/issues/56632
1324
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1325
* https://tracker.ceph.com/issues/50821
1326 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1327
1328
h3. 2022 Sep 26
1329
1330
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1331
1332
* https://tracker.ceph.com/issues/55804
1333
    qa failure: pjd link tests failed
1334
* https://tracker.ceph.com/issues/57676
1335
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1336
* https://tracker.ceph.com/issues/52624
1337
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1338
* https://tracker.ceph.com/issues/57580
1339
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1340
* https://tracker.ceph.com/issues/48773
1341
    qa: scrub does not complete
1342
* https://tracker.ceph.com/issues/57299
1343
    qa: test_dump_loads fails with JSONDecodeError
1344
* https://tracker.ceph.com/issues/57280
1345
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1346
* https://tracker.ceph.com/issues/57205
1347
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1348
* https://tracker.ceph.com/issues/57656
1349
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1350
* https://tracker.ceph.com/issues/57677
1351
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1352
* https://tracker.ceph.com/issues/57206
1353
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1354
* https://tracker.ceph.com/issues/57446
1355
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1356 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1357
    qa: fs:mixed-clients kernel_untar_build failure
1358 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1359
    client: ERROR: test_reconnect_after_blocklisted
1360 87 Patrick Donnelly
1361
1362
h3. 2022 Sep 22
1363
1364
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1365
1366
* https://tracker.ceph.com/issues/57299
1367
    qa: test_dump_loads fails with JSONDecodeError
1368
* https://tracker.ceph.com/issues/57205
1369
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1370
* https://tracker.ceph.com/issues/52624
1371
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1372
* https://tracker.ceph.com/issues/57580
1373
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1374
* https://tracker.ceph.com/issues/57280
1375
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1376
* https://tracker.ceph.com/issues/48773
1377
    qa: scrub does not complete
1378
* https://tracker.ceph.com/issues/56446
1379
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1380
* https://tracker.ceph.com/issues/57206
1381
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1382
* https://tracker.ceph.com/issues/51267
1383
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1384
1385
NEW:
1386
1387
* https://tracker.ceph.com/issues/57656
1388
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1389
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1390
    qa: fs:mixed-clients kernel_untar_build failure
1391
* https://tracker.ceph.com/issues/57657
1392
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1393
1394
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1395 80 Venky Shankar
1396 79 Venky Shankar
1397
h3. 2022 Sep 16
1398
1399
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1400
1401
* https://tracker.ceph.com/issues/57446
1402
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1403
* https://tracker.ceph.com/issues/57299
1404
    qa: test_dump_loads fails with JSONDecodeError
1405
* https://tracker.ceph.com/issues/50223
1406
    client.xxxx isn't responding to mclientcaps(revoke)
1407
* https://tracker.ceph.com/issues/52624
1408
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1409
* https://tracker.ceph.com/issues/57205
1410
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1411
* https://tracker.ceph.com/issues/57280
1412
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1413
* https://tracker.ceph.com/issues/51282
1414
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1415
* https://tracker.ceph.com/issues/48203
1416
  https://tracker.ceph.com/issues/36593
1417
    qa: quota failure
1418
    qa: quota failure caused by clients stepping on each other
1419
* https://tracker.ceph.com/issues/57580
1420 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1421
1422 76 Rishabh Dave
1423
h3. 2022 Aug 26
1424
1425
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1426
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1427
1428
* https://tracker.ceph.com/issues/57206
1429
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1430
* https://tracker.ceph.com/issues/56632
1431
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1432
* https://tracker.ceph.com/issues/56446
1433
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1434
* https://tracker.ceph.com/issues/51964
1435
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1436
* https://tracker.ceph.com/issues/53859
1437
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1438
1439
* https://tracker.ceph.com/issues/54460
1440
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1441
* https://tracker.ceph.com/issues/54462
1442
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1443
1445
* https://tracker.ceph.com/issues/36593
1446
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1447
1448
* https://tracker.ceph.com/issues/52624
1449
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1450
* https://tracker.ceph.com/issues/55804
1451
  Command failed (workunit test suites/pjd.sh)
1452
* https://tracker.ceph.com/issues/50223
1453
  client.xxxx isn't responding to mclientcaps(revoke)
1454 75 Venky Shankar
1455
1456
h3. 2022 Aug 22
1457
1458
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1459
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1460
1461
* https://tracker.ceph.com/issues/52624
1462
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1463
* https://tracker.ceph.com/issues/56446
1464
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1465
* https://tracker.ceph.com/issues/55804
1466
    Command failed (workunit test suites/pjd.sh)
1467
* https://tracker.ceph.com/issues/51278
1468
    mds: "FAILED ceph_assert(!segments.empty())"
1469
* https://tracker.ceph.com/issues/54460
1470
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1471
* https://tracker.ceph.com/issues/57205
1472
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1473
* https://tracker.ceph.com/issues/57206
1474
    ceph_test_libcephfs_reclaim crashes during test
1475
* https://tracker.ceph.com/issues/53859
1476
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1477
* https://tracker.ceph.com/issues/50223
1478 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1479
1480
h3. 2022 Aug 12
1481
1482
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1483
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1484
1485
* https://tracker.ceph.com/issues/52624
1486
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1487
* https://tracker.ceph.com/issues/56446
1488
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1489
* https://tracker.ceph.com/issues/51964
1490
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1491
* https://tracker.ceph.com/issues/55804
1492
    Command failed (workunit test suites/pjd.sh)
1493
* https://tracker.ceph.com/issues/50223
1494
    client.xxxx isn't responding to mclientcaps(revoke)
1495
* https://tracker.ceph.com/issues/50821
1496 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1497 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1498 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1499
1500
h3. 2022 Aug 04
1501
1502
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1503
1504 69 Rishabh Dave
Unrelated teuthology failure on RHEL.
1505 68 Rishabh Dave
1506
h3. 2022 Jul 25
1507
1508
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1509
1510 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1511
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1512 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1513
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1514
1515
* https://tracker.ceph.com/issues/55804
1516
  Command failed (workunit test suites/pjd.sh)
1517
* https://tracker.ceph.com/issues/50223
1518
  client.xxxx isn't responding to mclientcaps(revoke)
1519
1520
* https://tracker.ceph.com/issues/54460
1521
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1522 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1523 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1524 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1525 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1526
1527
h3. 2022 July 22
1528
1529
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1530
1531
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
1532
Transient SELinux ping failure.
1533
1534
* https://tracker.ceph.com/issues/56694
1535
    qa: avoid blocking forever on hung umount
1536
* https://tracker.ceph.com/issues/56695
1537
    [RHEL stock] pjd test failures
1538
* https://tracker.ceph.com/issues/56696
1539
    admin keyring disappears during qa run
1540
* https://tracker.ceph.com/issues/56697
1541
    qa: fs/snaps fails for fuse
1542
* https://tracker.ceph.com/issues/50222
1543
    osd: 5.2s0 deep-scrub : stat mismatch
1544
* https://tracker.ceph.com/issues/56698
1545
    client: FAILED ceph_assert(_size == 0)
1546
* https://tracker.ceph.com/issues/50223
1547
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1548 66 Rishabh Dave
1549 65 Rishabh Dave
1550
h3. 2022 Jul 15
1551
1552
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1553
1554
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1555
1556
* https://tracker.ceph.com/issues/53859
1557
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1558
* https://tracker.ceph.com/issues/55804
1559
  Command failed (workunit test suites/pjd.sh)
1560
* https://tracker.ceph.com/issues/50223
1561
  client.xxxx isn't responding to mclientcaps(revoke)
1562
* https://tracker.ceph.com/issues/50222
1563
  osd: deep-scrub : stat mismatch
1564
1565
* https://tracker.ceph.com/issues/56632
1566
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1567
* https://tracker.ceph.com/issues/56634
1568
  workunit test fs/snaps/snaptest-intodir.sh
1569
* https://tracker.ceph.com/issues/56644
1570
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1571
1572 61 Rishabh Dave
1573
1574
h3. 2022 July 05
1575 62 Rishabh Dave
1576 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1577
1578
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1579
1580
On the 2nd re-run only a few jobs failed:
1581 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1582
1583
1584
* https://tracker.ceph.com/issues/56446
1585
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1586
* https://tracker.ceph.com/issues/55804
1587
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1588
1589
* https://tracker.ceph.com/issues/56445
1590 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1591
* https://tracker.ceph.com/issues/51267
1592
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1593 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1594
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1595 61 Rishabh Dave
1596 58 Venky Shankar
1597
1598
h3. 2022 July 04
1599
1600
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1601
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
1602
1603
* https://tracker.ceph.com/issues/56445
1604 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1605
* https://tracker.ceph.com/issues/56446
1606
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1607
* https://tracker.ceph.com/issues/51964
1608 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1609 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1610 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1611
1612
h3. 2022 June 20
1613
1614
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1615
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1616
1617
* https://tracker.ceph.com/issues/52624
1618
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1619
* https://tracker.ceph.com/issues/55804
1620
    qa failure: pjd link tests failed
1621
* https://tracker.ceph.com/issues/54108
1622
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1623
* https://tracker.ceph.com/issues/55332
1624 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1625
1626
h3. 2022 June 13
1627
1628
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1629
1630
* https://tracker.ceph.com/issues/56024
1631
    cephadm: removes ceph.conf during qa run causing command failure
1632
* https://tracker.ceph.com/issues/48773
1633
    qa: scrub does not complete
1634
* https://tracker.ceph.com/issues/56012
1635
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1636 55 Venky Shankar
1637 54 Venky Shankar
1638
h3. 2022 Jun 13
1639
1640
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1641
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1642
1643
* https://tracker.ceph.com/issues/52624
1644
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1645
* https://tracker.ceph.com/issues/51964
1646
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1647
* https://tracker.ceph.com/issues/53859
1648
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1649
* https://tracker.ceph.com/issues/55804
1650
    qa failure: pjd link tests failed
1651
* https://tracker.ceph.com/issues/56003
1652
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1653
* https://tracker.ceph.com/issues/56011
1654
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1655
* https://tracker.ceph.com/issues/56012
1656 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1657
1658
h3. 2022 Jun 07
1659
1660
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1661
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1662
1663
* https://tracker.ceph.com/issues/52624
1664
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1665
* https://tracker.ceph.com/issues/50223
1666
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1667
* https://tracker.ceph.com/issues/50224
1668 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1669
1670
h3. 2022 May 12
1671 52 Venky Shankar
1672 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1673
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)
1674
1675
* https://tracker.ceph.com/issues/52624
1676
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1677
* https://tracker.ceph.com/issues/50223
1678
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1679
* https://tracker.ceph.com/issues/55332
1680
    Failure in snaptest-git-ceph.sh
1681
* https://tracker.ceph.com/issues/53859
1682 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1683 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1684
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1685 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1686 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1687
1688 50 Venky Shankar
h3. 2022 May 04
1689
1690
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1691 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1692
1693
* https://tracker.ceph.com/issues/52624
1694
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1695
* https://tracker.ceph.com/issues/50223
1696
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1697
* https://tracker.ceph.com/issues/55332
1698
    Failure in snaptest-git-ceph.sh
1699
* https://tracker.ceph.com/issues/53859
1700
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1701
* https://tracker.ceph.com/issues/55516
1702
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)" (see the note after this list)
1703
* https://tracker.ceph.com/issues/55537
1704
    mds: crash during fs:upgrade test
1705
* https://tracker.ceph.com/issues/55538
1706 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
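
A short note on the "Extra data" JSONDecodeError above (https://tracker.ceph.com/issues/55516): Python's json module raises it when the text being parsed contains more than one JSON document, for example when two command outputs get concatenated. Below is a minimal sketch that reproduces the shape of the error; the payload is a made-up example, not output from the failed job.

<pre>
# json.loads() accepts exactly one JSON document; anything after it is "Extra data".
import json

payload = '{"status": "ok"}\n{"status": "ok"}'  # hypothetical: two JSON documents glued together

try:
    json.loads(payload)
except json.JSONDecodeError as e:
    print(e)  # e.g. "Extra data: line 2 column 1 (char 17)"
</pre>
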
1707
1708
h3. 2022 Apr 25
1709
1710
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1711
1712
* https://tracker.ceph.com/issues/52624
1713
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1714
* https://tracker.ceph.com/issues/50223
1715
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1716
* https://tracker.ceph.com/issues/55258
1717
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1718
* https://tracker.ceph.com/issues/55377
1719 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1720
1721
h3. 2022 Apr 14
1722
1723
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1724
1725
* https://tracker.ceph.com/issues/52624
1726
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1727
* https://tracker.ceph.com/issues/50223
1728
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1729
* https://tracker.ceph.com/issues/52438
1730
    qa: ffsb timeout
1731
* https://tracker.ceph.com/issues/55170
1732
    mds: crash during rejoin (CDir::fetch_keys)
1733
* https://tracker.ceph.com/issues/55331
1734
    pjd failure
1735
* https://tracker.ceph.com/issues/48773
1736
    qa: scrub does not complete
1737
* https://tracker.ceph.com/issues/55332
1738
    Failure in snaptest-git-ceph.sh
1739
* https://tracker.ceph.com/issues/55258
1740 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1741
1742 46 Venky Shankar
h3. 2022 Apr 11
1743 45 Venky Shankar
1744
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1745
1746
* https://tracker.ceph.com/issues/48773
1747
    qa: scrub does not complete
1748
* https://tracker.ceph.com/issues/52624
1749
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1750
* https://tracker.ceph.com/issues/52438
1751
    qa: ffsb timeout
1752
* https://tracker.ceph.com/issues/48680
1753
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1754
* https://tracker.ceph.com/issues/55236
1755
    qa: fs/snaps tests fails with "hit max job timeout"
1756
* https://tracker.ceph.com/issues/54108
1757
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1758
* https://tracker.ceph.com/issues/54971
1759
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1760
* https://tracker.ceph.com/issues/50223
1761
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1762
* https://tracker.ceph.com/issues/55258
1763 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1764 42 Venky Shankar
1765 43 Venky Shankar
h3. 2022 Mar 21
1766
1767
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1768
1769
The run didn't go well: lots of failures. Debugging by dropping PRs and running against the master branch, and only merging unrelated PRs that pass tests.
1770
1771
1772 42 Venky Shankar
h3. 2022 Mar 08
1773
1774
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1775
1776
rerun with
1777
- (drop) https://github.com/ceph/ceph/pull/44679
1778
- (drop) https://github.com/ceph/ceph/pull/44958
1779
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1780
1781
* https://tracker.ceph.com/issues/54419 (new)
1782
    `ceph orch upgrade start` seems to never reach completion
1783
* https://tracker.ceph.com/issues/51964
1784
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1785
* https://tracker.ceph.com/issues/52624
1786
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1787
* https://tracker.ceph.com/issues/50223
1788
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1789
* https://tracker.ceph.com/issues/52438
1790
    qa: ffsb timeout
1791
* https://tracker.ceph.com/issues/50821
1792
    qa: untar_snap_rm failure during mds thrashing
1793 41 Venky Shankar
1794
1795
h3. 2022 Feb 09
1796
1797
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1798
1799
rerun with
1800
- (drop) https://github.com/ceph/ceph/pull/37938
1801
- (drop) https://github.com/ceph/ceph/pull/44335
1802
- (drop) https://github.com/ceph/ceph/pull/44491
1803
- (drop) https://github.com/ceph/ceph/pull/44501
1804
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1805
1806
* https://tracker.ceph.com/issues/51964
1807
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1808
* https://tracker.ceph.com/issues/54066
1809
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1810
* https://tracker.ceph.com/issues/48773
1811
    qa: scrub does not complete
1812
* https://tracker.ceph.com/issues/52624
1813
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1814
* https://tracker.ceph.com/issues/50223
1815
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1816
* https://tracker.ceph.com/issues/52438
1817 40 Patrick Donnelly
    qa: ffsb timeout
1818
1819
h3. 2022 Feb 01
1820
1821
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1822
1823
* https://tracker.ceph.com/issues/54107
1824
    kclient: hang during umount
1825
* https://tracker.ceph.com/issues/54106
1826
    kclient: hang during workunit cleanup
1827
* https://tracker.ceph.com/issues/54108
1828
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1829
* https://tracker.ceph.com/issues/48773
1830
    qa: scrub does not complete
1831
* https://tracker.ceph.com/issues/52438
1832
    qa: ffsb timeout
1833 36 Venky Shankar
1834
1835
h3. 2022 Jan 13
1836 39 Venky Shankar
1837 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1838 38 Venky Shankar
1839
rerun with:
1840 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1841
- (drop) https://github.com/ceph/ceph/pull/43184
1842
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1843
1844
* https://tracker.ceph.com/issues/50223
1845
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1846
* https://tracker.ceph.com/issues/51282
1847
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1848
* https://tracker.ceph.com/issues/48773
1849
    qa: scrub does not complete
1850
* https://tracker.ceph.com/issues/52624
1851
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1852
* https://tracker.ceph.com/issues/53859
1853 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1854
1855
h3. 2022 Jan 03
1856
1857
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
1858
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
1859
1860
* https://tracker.ceph.com/issues/50223
1861
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1862
* https://tracker.ceph.com/issues/51964
1863
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1864
* https://tracker.ceph.com/issues/51267
1865
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1866
* https://tracker.ceph.com/issues/51282
1867
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1868
* https://tracker.ceph.com/issues/50821
1869
    qa: untar_snap_rm failure during mds thrashing
1870 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
1871
    mds: "FAILED ceph_assert(!segments.empty())"
1872
* https://tracker.ceph.com/issues/52279
1873 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1874 33 Patrick Donnelly
1875
1876
h3. 2021 Dec 22
1877
1878
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
1879
1880
* https://tracker.ceph.com/issues/52624
1881
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1882
* https://tracker.ceph.com/issues/50223
1883
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1884
* https://tracker.ceph.com/issues/52279
1885
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1886
* https://tracker.ceph.com/issues/50224
1887
    qa: test_mirroring_init_failure_with_recovery failure
1888
* https://tracker.ceph.com/issues/48773
1889
    qa: scrub does not complete
1890 32 Venky Shankar
1891
1892
h3. 2021 Nov 30
1893
1894
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
1895
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
1896
1897
* https://tracker.ceph.com/issues/53436
1898
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
1899
* https://tracker.ceph.com/issues/51964
1900
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1901
* https://tracker.ceph.com/issues/48812
1902
    qa: test_scrub_pause_and_resume_with_abort failure
1903
* https://tracker.ceph.com/issues/51076
1904
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1905
* https://tracker.ceph.com/issues/50223
1906
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1907
* https://tracker.ceph.com/issues/52624
1908
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1909
* https://tracker.ceph.com/issues/50250
1910
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1911 31 Patrick Donnelly
1912
1913
h3. 2021 November 9
1914
1915
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
1916
1917
* https://tracker.ceph.com/issues/53214
1918
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
1919
* https://tracker.ceph.com/issues/48773
1920
    qa: scrub does not complete
1921
* https://tracker.ceph.com/issues/50223
1922
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1923
* https://tracker.ceph.com/issues/51282
1924
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1925
* https://tracker.ceph.com/issues/52624
1926
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1927
* https://tracker.ceph.com/issues/53216
1928
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
1929
* https://tracker.ceph.com/issues/50250
1930
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1931
1932 30 Patrick Donnelly
1933
1934
h3. 2021 November 03
1935
1936
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
1937
1938
* https://tracker.ceph.com/issues/51964
1939
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1940
* https://tracker.ceph.com/issues/51282
1941
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1942
* https://tracker.ceph.com/issues/52436
1943
    fs/ceph: "corrupt mdsmap"
1944
* https://tracker.ceph.com/issues/53074
1945
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1946
* https://tracker.ceph.com/issues/53150
1947
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
1948
* https://tracker.ceph.com/issues/53155
1949
    MDSMonitor: assertion during upgrade to v16.2.5+
1950 29 Patrick Donnelly
1951
1952
h3. 2021 October 26
1953
1954
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1955
1956
* https://tracker.ceph.com/issues/53074
1957
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1958
* https://tracker.ceph.com/issues/52997
1959
    testing: hanging umount
1960
* https://tracker.ceph.com/issues/50824
1961
    qa: snaptest-git-ceph bus error
1962
* https://tracker.ceph.com/issues/52436
1963
    fs/ceph: "corrupt mdsmap"
1964
* https://tracker.ceph.com/issues/48773
1965
    qa: scrub does not complete
1966
* https://tracker.ceph.com/issues/53082
1967
    ceph-fuse: segmentation fault in Client::handle_mds_map
1968
* https://tracker.ceph.com/issues/50223
1969
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1970
* https://tracker.ceph.com/issues/52624
1971
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1972
* https://tracker.ceph.com/issues/50224
1973
    qa: test_mirroring_init_failure_with_recovery failure
1974
* https://tracker.ceph.com/issues/50821
1975
    qa: untar_snap_rm failure during mds thrashing
1976
* https://tracker.ceph.com/issues/50250
1977
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1978
1979 27 Patrick Donnelly
1980
1981 28 Patrick Donnelly
h3. 2021 October 19
1982 27 Patrick Donnelly
1983
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
1984
1985
* https://tracker.ceph.com/issues/52995
1986
    qa: test_standby_count_wanted failure
1987
* https://tracker.ceph.com/issues/52948
1988
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1989
* https://tracker.ceph.com/issues/52996
1990
    qa: test_perf_counters via test_openfiletable
1991
* https://tracker.ceph.com/issues/48772
1992
    qa: pjd: not ok 9, 44, 80
1993
* https://tracker.ceph.com/issues/52997
1994
    testing: hanging umount
1995
* https://tracker.ceph.com/issues/50250
1996
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1997
* https://tracker.ceph.com/issues/52624
1998
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1999
* https://tracker.ceph.com/issues/50223
2000
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2001
* https://tracker.ceph.com/issues/50821
2002
    qa: untar_snap_rm failure during mds thrashing
2003
* https://tracker.ceph.com/issues/48773
2004
    qa: scrub does not complete
2005 26 Patrick Donnelly
2006
2007
h3. 2021 October 12
2008
2009
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
2010
2011
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
2012
2013
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
2014
2015
2016
* https://tracker.ceph.com/issues/51282
2017
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2018
* https://tracker.ceph.com/issues/52948
2019
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
2020
* https://tracker.ceph.com/issues/48773
2021
    qa: scrub does not complete
2022
* https://tracker.ceph.com/issues/50224
2023
    qa: test_mirroring_init_failure_with_recovery failure
2024
* https://tracker.ceph.com/issues/52949
2025
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
2026 25 Patrick Donnelly
2027 23 Patrick Donnelly
2028 24 Patrick Donnelly
h3. 2021 October 02

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337

Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.

test_simple failures caused by PR in this set.

A few reruns because of QA infra noise.

* https://tracker.ceph.com/issues/52822
    qa: failed pacific install on fs:upgrade
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 September 20

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826

* https://tracker.ceph.com/issues/52677
    qa: test_simple failure
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by a PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

And a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing