Main » History » Version 193

Venky Shankar, 10/20/2023 04:49 AM

h1. MAIN

h3. NEW ENTRY BELOW

h3. 13 Oct 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20231013.093215

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/47292
    cephfs-shell: test_df_for_valid_file failure
* https://tracker.ceph.com/issues/63141
    qa/cephfs: test_idem_unaffected_root_squash fails
* https://tracker.ceph.com/issues/63233
    mds: valgrind reports possible leaks in the MDS (see the valgrind sketch below)
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/58945
    qa: xfstests-dev's generic test suite has 20 failures with fuse client

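To chase the reported MDS leaks outside of teuthology, a daemon can be run under valgrind with full leak checking, roughly in the spirit of the qa valgrind overrides. This is only a local-reproduction sketch; the daemon id, cluster name and log path are placeholders.

<pre>
# Sketch: run an MDS in the foreground under valgrind with leak checking.
# "-i a" (daemon id) and the log file path are placeholders.
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=definite \
    --num-callers=50 --log-file=/tmp/ceph-mds.a.valgrind.log \
    ceph-mds -f -i a --cluster ceph
</pre>
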
h3. 16 Oct 2023

https://pulpito.ceph.com/?branch=wip-batrick-testing-20231016.203825

Infrastructure issues:
* /teuthology/pdonnell-2023-10-19_12:04:12-fs-wip-batrick-testing-20231016.203825-distro-default-smithi/7432286/teuthology.log
    Host lost.

Failures:

* https://tracker.ceph.com/issues/56694
    qa: avoid blocking forever on hung umount
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" (see the fio sketch below)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/63233
  mon|client|mds: valgrind reports possible leaks in the MDS

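The 59531 warning itself suggests the remedy: measure the device's IOPS with an external benchmark and pin the OSD's capacity instead of relying on the automatic osd bench probe. A minimal sketch, assuming the mClock scheduler is in use; the scratch file path, OSD id and measured value are placeholders.

<pre>
# Sketch: probe 4k random-write IOPS with fio, then override the mClock
# capacity for the affected OSD with the measured value (placeholders below).
fio --name=iops-probe --filename=/mnt/scratch/fio.dat --size=1G --direct=1 \
    --ioengine=libaio --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
ceph config set osd.7 osd_mclock_max_capacity_iops_ssd <measured_iops>
</pre>
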
h3. 9 Oct 2023

https://pulpito.ceph.com/rishabh-2023-10-06_11:56:52-fs-rishabh-cephfs-mon-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/63141
  test_idem_unaffected_root_squash (test_admin.TestFsAuthorizeUpdate) fails
* https://tracker.ceph.com/issues/62937
  logrotate doesn't support parallel execution on same set of logfiles (see the flock sketch below)
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs

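Since logrotate refuses to run concurrently against the same set of logfiles, one common workaround for the qa log-rotation helper would be to serialize invocations with an exclusive lock. A hypothetical sketch; the lock file path is arbitrary.

<pre>
# Sketch: take an exclusive, non-blocking lock before rotating the shared
# test config, so overlapping invocations skip instead of failing.
flock --exclusive --nonblock /tmp/ceph-test-logrotate.lock \
    sudo logrotate /etc/logrotate.d/ceph-test.conf \
    || echo "another rotation is already in progress"
</pre>
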
h3. 26 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230926.081818

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62873
    qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true (see the sketch below)
* https://tracker.ceph.com/issues/63089
    qa: tasks/mirror times out

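For 62682 the symptom is that clients do not see a new mdsmap epoch after the flag flips. A hedged sketch of the commands involved in reproducing/checking this; "cephfs" is a placeholder file system name.

<pre>
# Sketch: flip the joinable flag and check whether a new FSMap epoch with the
# flag set is actually published ("cephfs" is a placeholder fs name).
ceph fs set cephfs joinable false
ceph fs set cephfs joinable true
ceph fs dump | grep -E 'epoch|joinable'
</pre>
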
h3. 22 Sep 2023

https://pulpito.ceph.com/rishabh-2023-09-12_12:12:15-fs-wip-rishabh-2023sep12-b2-testing-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59531
  "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61574
  build failure for mdtest project
* https://tracker.ceph.com/issues/62702
  fsstress.sh: MDS slow requests for the internal 'rename' requests
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}

* https://tracker.ceph.com/issues/62863
  deadlock in ceph-fuse causes teuthology job to hang and fail
* https://tracker.ceph.com/issues/62870
  test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
* https://tracker.ceph.com/issues/62873
  test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)

h3. 20 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230920.072635

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62915
    qa/suites/fs/nfs: No orchestrator configured (try `ceph orch set backend`) while running test cases
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/62936
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/62937
    Command failed on smithi027 with status 3: 'sudo logrotate /etc/logrotate.d/ceph-test.conf'
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62126
    test failure: suites/blogbench.sh stops running
* https://tracker.ceph.com/issues/62682
    mon: no mdsmap broadcast after "fs set joinable" is set to true

h3. 19 Sep 2023

http://pulpito.front.sepia.ceph.com/mchangir-2023-09-12_05:40:22-fs-wip-mchangir-testing-20230908.140927-testing-default-smithi/

* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62702
  Command failed (workunit test suites/fsstress.sh) on smithi124 with status 124
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/52624
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61243
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62873
  qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/62482
  qa: cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 12 Sep 2023

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures were caused by qa refactoring in https://github.com/ceph/ceph/pull/48130; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin) caused by fragmentation from config changes.

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/62847
    mds: blogbench requests stuck (5mds+scrub+snaps-flush)
* https://tracker.ceph.com/issues/62848
    qa: fail_fs upgrade scenario hanging
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found

* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes those failures:
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failures were seen on the above runs for the first time; the following run with the main branch confirms that the "blogbench.sh" failure was not related to any of the PRs under testing:
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

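As a hedged illustration of the rerun workflow described above, only the failed and dead jobs from a prior run can be rescheduled against a rebuilt branch with teuthology-suite; the run name and branch below are placeholders, and the exact flags should be confirmed against teuthology-suite --help.

<pre>
# Sketch: reschedule only the failed/dead jobs of a previous run against a
# rebuilt wip branch (run name and branch are placeholders).
teuthology-suite --machine-type smithi --suite fs \
    --ceph <rebuilt-wip-branch> \
    --rerun <previous-run-name> --rerun-statuses fail,dead
</pre>
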
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim

* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace

* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023
1004
1005
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
1006
1007
* https://tracker.ceph.com/issues/52624
1008
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1009
* https://tracker.ceph.com/issues/58340
1010
    mds: fsstress.sh hangs with multimds
1011
* https://tracker.ceph.com/issues/54460
1012
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1013
* https://tracker.ceph.com/issues/57676
1014
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1015 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
1016 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1017 121 Rishabh Dave
1018 120 Rishabh Dave
h3. 31 Mar 2023
1019 122 Rishabh Dave
1020
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
1021 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
1022
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
1023
1024
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually, over subsequent re-runs).
1025
1026
* https://tracker.ceph.com/issues/57676
1027
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1028
* https://tracker.ceph.com/issues/54460
1029
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1030
* https://tracker.ceph.com/issues/58220
1031
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1032
* https://tracker.ceph.com/issues/58220#note-9
1033
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
1034
* https://tracker.ceph.com/issues/56695
1035
  Command failed (workunit test suites/pjd.sh)
1036
* https://tracker.ceph.com/issues/58564 
1037
  workunit dbench failed with error code 1
1038
* https://tracker.ceph.com/issues/57206
1039
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1040
* https://tracker.ceph.com/issues/57580
1041
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1042
* https://tracker.ceph.com/issues/58940
1043
  ceph osd hit ceph_abort
1044
* https://tracker.ceph.com/issues/55805
1045 118 Venky Shankar
  error during scrub thrashing: reached max tries in 900 secs
1046
1047
h3. 30 March 2023
1048
1049
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
1050
1051
* https://tracker.ceph.com/issues/58938
1052
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1053
* https://tracker.ceph.com/issues/51964
1054
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1055
* https://tracker.ceph.com/issues/58340
1056 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
1057
1058 115 Venky Shankar
h3. 29 March 2023
1059 114 Venky Shankar
1060
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
1061
1062
* https://tracker.ceph.com/issues/56695
1063
    [RHEL stock] pjd test failures
1064
* https://tracker.ceph.com/issues/57676
1065
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1066
* https://tracker.ceph.com/issues/57087
1067
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1068 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
1069
    mds: fsstress.sh hangs with multimds
1070 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
1071
    qa: fs:mixed-clients kernel_untar_build failure
1072 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
1073
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
1074 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
1075 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
1076
1077
h3. 13 Mar 2023
1078
1079
* https://tracker.ceph.com/issues/56695
1080
    [RHEL stock] pjd test failures
1081
* https://tracker.ceph.com/issues/57676
1082
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1083
* https://tracker.ceph.com/issues/51964
1084
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1085
* https://tracker.ceph.com/issues/54460
1086
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1087
* https://tracker.ceph.com/issues/57656
1088 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1089
1090
h3. 09 Mar 2023
1091
1092
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
1093
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
1094
1095
* https://tracker.ceph.com/issues/56695
1096
    [RHEL stock] pjd test failures
1097
* https://tracker.ceph.com/issues/57676
1098
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1099
* https://tracker.ceph.com/issues/51964
1100
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1101
* https://tracker.ceph.com/issues/54460
1102
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1103
* https://tracker.ceph.com/issues/58340
1104
    mds: fsstress.sh hangs with multimds
1105
* https://tracker.ceph.com/issues/57087
1106 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
1107
1108
h3. 07 Mar 2023
1109
1110
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
1111
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
1112
1113
* https://tracker.ceph.com/issues/56695
1114
    [RHEL stock] pjd test failures
1115
* https://tracker.ceph.com/issues/57676
1116
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1117
* https://tracker.ceph.com/issues/51964
1118
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1119
* https://tracker.ceph.com/issues/57656
1120
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1121
* https://tracker.ceph.com/issues/57655
1122
    qa: fs:mixed-clients kernel_untar_build failure
1123
* https://tracker.ceph.com/issues/58220
1124
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1125
* https://tracker.ceph.com/issues/54460
1126
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1127
* https://tracker.ceph.com/issues/58934
1128 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
1129
1130
h3. 28 Feb 2023
1131
1132
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
1133
1134
* https://tracker.ceph.com/issues/56695
1135
    [RHEL stock] pjd test failures
1136
* https://tracker.ceph.com/issues/57676
1137
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1138 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
1139 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1140
1141 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
1142
1143
h3. 25 Jan 2023
1144
1145
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
1146
1147
* https://tracker.ceph.com/issues/52624
1148
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1149
* https://tracker.ceph.com/issues/56695
1150
    [RHEL stock] pjd test failures
1151
* https://tracker.ceph.com/issues/57676
1152
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1153
* https://tracker.ceph.com/issues/56446
1154
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1155
* https://tracker.ceph.com/issues/57206
1156
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1157
* https://tracker.ceph.com/issues/58220
1158
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1159
* https://tracker.ceph.com/issues/58340
1160
  mds: fsstress.sh hangs with multimds
1161
* https://tracker.ceph.com/issues/56011
1162
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1163
* https://tracker.ceph.com/issues/54460
1164 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1165
1166
h3. 30 Jan 2023
1167
1168
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
1169
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1170 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
1171
1172 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1173
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
1174
* https://tracker.ceph.com/issues/56695
1175
  [RHEL stock] pjd test failures
1176
* https://tracker.ceph.com/issues/57676
1177
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1178
* https://tracker.ceph.com/issues/55332
1179
  Failure in snaptest-git-ceph.sh
1180
* https://tracker.ceph.com/issues/51964
1181
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1182
* https://tracker.ceph.com/issues/56446
1183
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1184
* https://tracker.ceph.com/issues/57655 
1185
  qa: fs:mixed-clients kernel_untar_build failure
1186
* https://tracker.ceph.com/issues/54460
1187
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1188 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1189
  mds: fsstress.sh hangs with multimds
1190 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1191 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1192
1193
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1194 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1195
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1196 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1197 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1198
1199
h3. 15 Dec 2022
1200
1201
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1202
1203
* https://tracker.ceph.com/issues/52624
1204
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1205
* https://tracker.ceph.com/issues/56695
1206
    [RHEL stock] pjd test failures
1207
* https://tracker.ceph.com/issues/58219
1208
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1209
* https://tracker.ceph.com/issues/57655
1210
    qa: fs:mixed-clients kernel_untar_build failure
1211
* https://tracker.ceph.com/issues/57676
1212
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1213
* https://tracker.ceph.com/issues/58340
1214 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1215
1216
h3. 08 Dec 2022
1217 99 Venky Shankar
1218 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1219
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1220
1221
(lots of transient git.ceph.com failures)
1222
1223
* https://tracker.ceph.com/issues/52624
1224
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1225
* https://tracker.ceph.com/issues/56695
1226
    [RHEL stock] pjd test failures
1227
* https://tracker.ceph.com/issues/57655
1228
    qa: fs:mixed-clients kernel_untar_build failure
1229
* https://tracker.ceph.com/issues/58219
1230
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1231
* https://tracker.ceph.com/issues/58220
1232
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1233 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1234
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1235 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1236
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1237
* https://tracker.ceph.com/issues/54460
1238
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1239 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1240 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1241
1242
h3. 14 Oct 2022
1243
1244
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1245
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1246
1247
* https://tracker.ceph.com/issues/52624
1248
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1249
* https://tracker.ceph.com/issues/55804
1250
    Command failed (workunit test suites/pjd.sh)
1251
* https://tracker.ceph.com/issues/51964
1252
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1253
* https://tracker.ceph.com/issues/57682
1254
    client: ERROR: test_reconnect_after_blocklisted
1255 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1256 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1257
1258
h3. 10 Oct 2022
1259 92 Rishabh Dave
1260 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1261
1262
reruns
1263
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1264 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1265 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1266 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1267 91 Rishabh Dave
1268
known bugs
1269
* https://tracker.ceph.com/issues/52624
1270
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1271
* https://tracker.ceph.com/issues/50223
1272
  client.xxxx isn't responding to mclientcaps(revoke)
1273
* https://tracker.ceph.com/issues/57299
1274
  qa: test_dump_loads fails with JSONDecodeError
1275
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1276
  qa: fs:mixed-clients kernel_untar_build failure
1277
* https://tracker.ceph.com/issues/57206
1278 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1279
1280
h3. 2022 Sep 29
1281
1282
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1283
1284
* https://tracker.ceph.com/issues/55804
1285
  Command failed (workunit test suites/pjd.sh)
1286
* https://tracker.ceph.com/issues/36593
1287
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1288
* https://tracker.ceph.com/issues/52624
1289
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1290
* https://tracker.ceph.com/issues/51964
1291
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1292
* https://tracker.ceph.com/issues/56632
1293
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1294
* https://tracker.ceph.com/issues/50821
1295 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1296
1297
h3. 2022 Sep 26
1298
1299
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1300
1301
* https://tracker.ceph.com/issues/55804
1302
    qa failure: pjd link tests failed
1303
* https://tracker.ceph.com/issues/57676
1304
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1305
* https://tracker.ceph.com/issues/52624
1306
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1307
* https://tracker.ceph.com/issues/57580
1308
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1309
* https://tracker.ceph.com/issues/48773
1310
    qa: scrub does not complete
1311
* https://tracker.ceph.com/issues/57299
1312
    qa: test_dump_loads fails with JSONDecodeError
1313
* https://tracker.ceph.com/issues/57280
1314
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1315
* https://tracker.ceph.com/issues/57205
1316
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1317
* https://tracker.ceph.com/issues/57656
1318
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1319
* https://tracker.ceph.com/issues/57677
1320
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1321
* https://tracker.ceph.com/issues/57206
1322
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1323
* https://tracker.ceph.com/issues/57446
1324
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1325 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1326
    qa: fs:mixed-clients kernel_untar_build failure
1327 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1328
    client: ERROR: test_reconnect_after_blocklisted
1329 87 Patrick Donnelly
1330
1331
h3. 2022 Sep 22
1332
1333
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1334
1335
* https://tracker.ceph.com/issues/57299
1336
    qa: test_dump_loads fails with JSONDecodeError
1337
* https://tracker.ceph.com/issues/57205
1338
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1339
* https://tracker.ceph.com/issues/52624
1340
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1341
* https://tracker.ceph.com/issues/57580
1342
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1343
* https://tracker.ceph.com/issues/57280
1344
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1345
* https://tracker.ceph.com/issues/48773
1346
    qa: scrub does not complete
1347
* https://tracker.ceph.com/issues/56446
1348
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1349
* https://tracker.ceph.com/issues/57206
1350
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1351
* https://tracker.ceph.com/issues/51267
1352
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1353
1354
NEW:
1355
1356
* https://tracker.ceph.com/issues/57656
1357
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1358
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1359
    qa: fs:mixed-clients kernel_untar_build failure
1360
* https://tracker.ceph.com/issues/57657
1361
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1362
1363
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1364 80 Venky Shankar
1365 79 Venky Shankar
1366
h3. 2022 Sep 16
1367
1368
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1369
1370
* https://tracker.ceph.com/issues/57446
1371
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1372
* https://tracker.ceph.com/issues/57299
1373
    qa: test_dump_loads fails with JSONDecodeError
1374
* https://tracker.ceph.com/issues/50223
1375
    client.xxxx isn't responding to mclientcaps(revoke)
1376
* https://tracker.ceph.com/issues/52624
1377
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1378
* https://tracker.ceph.com/issues/57205
1379
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1380
* https://tracker.ceph.com/issues/57280
1381
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1382
* https://tracker.ceph.com/issues/51282
1383
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1384
* https://tracker.ceph.com/issues/48203
1385
  https://tracker.ceph.com/issues/36593
1386
    qa: quota failure
1387
    qa: quota failure caused by clients stepping on each other
1388
* https://tracker.ceph.com/issues/57580
1389 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1390
1391 76 Rishabh Dave
1392
h3. 2022 Aug 26
1393
1394
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1395
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1396
1397
* https://tracker.ceph.com/issues/57206
1398
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1399
* https://tracker.ceph.com/issues/56632
1400
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1401
* https://tracker.ceph.com/issues/56446
1402
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1403
* https://tracker.ceph.com/issues/51964
1404
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1405
* https://tracker.ceph.com/issues/53859
1406
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1407
1408
* https://tracker.ceph.com/issues/54460
1409
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1410
* https://tracker.ceph.com/issues/54462
1411
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1412
* https://tracker.ceph.com/issues/54460
1413
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1414
* https://tracker.ceph.com/issues/36593
1415
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1416
1417
* https://tracker.ceph.com/issues/52624
1418
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1419
* https://tracker.ceph.com/issues/55804
1420
  Command failed (workunit test suites/pjd.sh)
1421
* https://tracker.ceph.com/issues/50223
1422
  client.xxxx isn't responding to mclientcaps(revoke)
1423 75 Venky Shankar
1424
1425
h3. 2022 Aug 22
1426
1427
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1428
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1429
1430
* https://tracker.ceph.com/issues/52624
1431
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1432
* https://tracker.ceph.com/issues/56446
1433
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1434
* https://tracker.ceph.com/issues/55804
1435
    Command failed (workunit test suites/pjd.sh)
1436
* https://tracker.ceph.com/issues/51278
1437
    mds: "FAILED ceph_assert(!segments.empty())"
1438
* https://tracker.ceph.com/issues/54460
1439
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1440
* https://tracker.ceph.com/issues/57205
1441
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1442
* https://tracker.ceph.com/issues/57206
1443
    ceph_test_libcephfs_reclaim crashes during test
1444
* https://tracker.ceph.com/issues/53859
1445
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1446
* https://tracker.ceph.com/issues/50223
1447 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1448
1449
h3. 2022 Aug 12
1450
1451
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1452
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1453
1454
* https://tracker.ceph.com/issues/52624
1455
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1456
* https://tracker.ceph.com/issues/56446
1457
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1458
* https://tracker.ceph.com/issues/51964
1459
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1460
* https://tracker.ceph.com/issues/55804
1461
    Command failed (workunit test suites/pjd.sh)
1462
* https://tracker.ceph.com/issues/50223
1463
    client.xxxx isn't responding to mclientcaps(revoke)
1464
* https://tracker.ceph.com/issues/50821
1465 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1466 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1467 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1468
1469
h3. 2022 Aug 04
1470
1471
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1472
1473 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1474 68 Rishabh Dave
1475
h3. 2022 Jul 25
1476
1477
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1478
1479 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1480
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1481 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1482
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1483
1484
* https://tracker.ceph.com/issues/55804
1485
  Command failed (workunit test suites/pjd.sh)
1486
* https://tracker.ceph.com/issues/50223
1487
  client.xxxx isn't responding to mclientcaps(revoke)
1488
1489
* https://tracker.ceph.com/issues/54460
1490
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1491 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1492 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1493 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1494 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1495
1496
h3. 2022 July 22
1497
1498
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1499
1500
MDS_HEALTH_DUMMY error in log fixed by followup commit.
1501
transient selinux ping failure
1502
1503
* https://tracker.ceph.com/issues/56694
1504
    qa: avoid blocking forever on hung umount
1505
* https://tracker.ceph.com/issues/56695
1506
    [RHEL stock] pjd test failures
1507
* https://tracker.ceph.com/issues/56696
1508
    admin keyring disappears during qa run
1509
* https://tracker.ceph.com/issues/56697
1510
    qa: fs/snaps fails for fuse
1511
* https://tracker.ceph.com/issues/50222
1512
    osd: 5.2s0 deep-scrub : stat mismatch
1513
* https://tracker.ceph.com/issues/56698
1514
    client: FAILED ceph_assert(_size == 0)
1515
* https://tracker.ceph.com/issues/50223
1516
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1517 66 Rishabh Dave
1518 65 Rishabh Dave
1519
h3. 2022 Jul 15
1520
1521
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1522
1523
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1524
1525
* https://tracker.ceph.com/issues/53859
1526
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1527
* https://tracker.ceph.com/issues/55804
1528
  Command failed (workunit test suites/pjd.sh)
1529
* https://tracker.ceph.com/issues/50223
1530
  client.xxxx isn't responding to mclientcaps(revoke)
1531
* https://tracker.ceph.com/issues/50222
1532
  osd: deep-scrub : stat mismatch
1533
1534
* https://tracker.ceph.com/issues/56632
1535
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1536
* https://tracker.ceph.com/issues/56634
1537
  workunit test fs/snaps/snaptest-intodir.sh
1538
* https://tracker.ceph.com/issues/56644
1539
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1540
1541 61 Rishabh Dave
1542
1543
h3. 2022 July 05
1544 62 Rishabh Dave
1545 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1546
1547
On the 1st re-run, some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1548
1549
On the 2nd re-run, only a few jobs failed -
1550 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1551
1552
1553
* https://tracker.ceph.com/issues/56446
1554
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1555
* https://tracker.ceph.com/issues/55804
1556
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1557
1558
* https://tracker.ceph.com/issues/56445
1559 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1560
* https://tracker.ceph.com/issues/51267
1561
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1562 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1563
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1564 61 Rishabh Dave
1565 58 Venky Shankar
1566
1567
h3. 2022 July 04
1568
1569
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1570
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
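
For reference, a minimal sketch of re-scheduling such a run while skipping the RHEL jobs (the suite and machine type below are placeholders and the branch is taken from the run above; only --filter-out=rhel comes from the note itself, so this is not the exact invocation used here):

<pre>
# Hypothetical teuthology-suite invocation that excludes RHEL jobs.
teuthology-suite \
  --machine-type smithi \
  --suite fs \
  --ceph wip-vshankar-testing-20220627-100931 \
  --filter-out rhel
</pre>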
1571
1572
* https://tracker.ceph.com/issues/56445
1573 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1574
* https://tracker.ceph.com/issues/56446
1575
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1576
* https://tracker.ceph.com/issues/51964
1577 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1578 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1579 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1580
1581
h3. 2022 June 20
1582
1583
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1584
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1585
1586
* https://tracker.ceph.com/issues/52624
1587
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1588
* https://tracker.ceph.com/issues/55804
1589
    qa failure: pjd link tests failed
1590
* https://tracker.ceph.com/issues/54108
1591
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1592
* https://tracker.ceph.com/issues/55332
1593 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1594
1595
h3. 2022 June 13
1596
1597
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1598
1599
* https://tracker.ceph.com/issues/56024
1600
    cephadm: removes ceph.conf during qa run causing command failure
1601
* https://tracker.ceph.com/issues/48773
1602
    qa: scrub does not complete
1603
* https://tracker.ceph.com/issues/56012
1604
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1605 55 Venky Shankar
1606 54 Venky Shankar
1607
h3. 2022 Jun 13
1608
1609
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1610
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1611
1612
* https://tracker.ceph.com/issues/52624
1613
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1614
* https://tracker.ceph.com/issues/51964
1615
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1616
* https://tracker.ceph.com/issues/53859
1617
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1618
* https://tracker.ceph.com/issues/55804
1619
    qa failure: pjd link tests failed
1620
* https://tracker.ceph.com/issues/56003
1621
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1622
* https://tracker.ceph.com/issues/56011
1623
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
1624
* https://tracker.ceph.com/issues/56012
1625 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1626
1627
h3. 2022 Jun 07
1628
1629
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1630
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1631
1632
* https://tracker.ceph.com/issues/52624
1633
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1634
* https://tracker.ceph.com/issues/50223
1635
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1636
* https://tracker.ceph.com/issues/50224
1637 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1638
1639
h3. 2022 May 12
1640 52 Venky Shankar
1641 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1642
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop prs + rerun)
1643
1644
* https://tracker.ceph.com/issues/52624
1645
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1646
* https://tracker.ceph.com/issues/50223
1647
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1648
* https://tracker.ceph.com/issues/55332
1649
    Failure in snaptest-git-ceph.sh
1650
* https://tracker.ceph.com/issues/53859
1651 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1652 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1653
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1654 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1655 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1656
1657 50 Venky Shankar
h3. 2022 May 04
1658
1659
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1660 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1661
1662
* https://tracker.ceph.com/issues/52624
1663
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1664
* https://tracker.ceph.com/issues/50223
1665
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1666
* https://tracker.ceph.com/issues/55332
1667
    Failure in snaptest-git-ceph.sh
1668
* https://tracker.ceph.com/issues/53859
1669
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1670
* https://tracker.ceph.com/issues/55516
1671
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1672
* https://tracker.ceph.com/issues/55537
1673
    mds: crash during fs:upgrade test
1674
* https://tracker.ceph.com/issues/55538
1675 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1676
1677
h3. 2022 Apr 25
1678
1679
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1680
1681
* https://tracker.ceph.com/issues/52624
1682
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1683
* https://tracker.ceph.com/issues/50223
1684
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1685
* https://tracker.ceph.com/issues/55258
1686
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1687
* https://tracker.ceph.com/issues/55377
1688 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1689
1690
h3. 2022 Apr 14
1691
1692
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1693
1694
* https://tracker.ceph.com/issues/52624
1695
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1696
* https://tracker.ceph.com/issues/50223
1697
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1698
* https://tracker.ceph.com/issues/52438
1699
    qa: ffsb timeout
1700
* https://tracker.ceph.com/issues/55170
1701
    mds: crash during rejoin (CDir::fetch_keys)
1702
* https://tracker.ceph.com/issues/55331
1703
    pjd failure
1704
* https://tracker.ceph.com/issues/48773
1705
    qa: scrub does not complete
1706
* https://tracker.ceph.com/issues/55332
1707
    Failure in snaptest-git-ceph.sh
1708
* https://tracker.ceph.com/issues/55258
1709 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1710
1711 46 Venky Shankar
h3. 2022 Apr 11
1712 45 Venky Shankar
1713
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1714
1715
* https://tracker.ceph.com/issues/48773
1716
    qa: scrub does not complete
1717
* https://tracker.ceph.com/issues/52624
1718
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1719
* https://tracker.ceph.com/issues/52438
1720
    qa: ffsb timeout
1721
* https://tracker.ceph.com/issues/48680
1722
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1723
* https://tracker.ceph.com/issues/55236
1724
    qa: fs/snaps tests fails with "hit max job timeout"
1725
* https://tracker.ceph.com/issues/54108
1726
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1727
* https://tracker.ceph.com/issues/54971
1728
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1729
* https://tracker.ceph.com/issues/50223
1730
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1731
* https://tracker.ceph.com/issues/55258
1732 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1733 42 Venky Shankar
1734 43 Venky Shankar
h3. 2022 Mar 21
1735
1736
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1737
1738
Run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
1739
1740
1741 42 Venky Shankar
h3. 2022 Mar 08
1742
1743
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1744
1745
rerun with
1746
- (drop) https://github.com/ceph/ceph/pull/44679
1747
- (drop) https://github.com/ceph/ceph/pull/44958
1748
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1749
1750
* https://tracker.ceph.com/issues/54419 (new)
1751
    `ceph orch upgrade start` seems to never reach completion
1752
* https://tracker.ceph.com/issues/51964
1753
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1754
* https://tracker.ceph.com/issues/52624
1755
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1756
* https://tracker.ceph.com/issues/50223
1757
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1758
* https://tracker.ceph.com/issues/52438
1759
    qa: ffsb timeout
1760
* https://tracker.ceph.com/issues/50821
1761
    qa: untar_snap_rm failure during mds thrashing
1762 41 Venky Shankar
1763
1764
h3. 2022 Feb 09
1765
1766
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1767
1768
rerun with
1769
- (drop) https://github.com/ceph/ceph/pull/37938
1770
- (drop) https://github.com/ceph/ceph/pull/44335
1771
- (drop) https://github.com/ceph/ceph/pull/44491
1772
- (drop) https://github.com/ceph/ceph/pull/44501
1773
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1774
1775
* https://tracker.ceph.com/issues/51964
1776
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1777
* https://tracker.ceph.com/issues/54066
1778
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1779
* https://tracker.ceph.com/issues/48773
1780
    qa: scrub does not complete
1781
* https://tracker.ceph.com/issues/52624
1782
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1783
* https://tracker.ceph.com/issues/50223
1784
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1785
* https://tracker.ceph.com/issues/52438
1786 40 Patrick Donnelly
    qa: ffsb timeout
1787
1788
h3. 2022 Feb 01
1789
1790
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1791
1792
* https://tracker.ceph.com/issues/54107
1793
    kclient: hang during umount
1794
* https://tracker.ceph.com/issues/54106
1795
    kclient: hang during workunit cleanup
1796
* https://tracker.ceph.com/issues/54108
1797
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1798
* https://tracker.ceph.com/issues/48773
1799
    qa: scrub does not complete
1800
* https://tracker.ceph.com/issues/52438
1801
    qa: ffsb timeout
1802 36 Venky Shankar
1803
1804
h3. 2022 Jan 13
1805 39 Venky Shankar
1806 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1807 38 Venky Shankar
1808
rerun with:
1809 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1810
- (drop) https://github.com/ceph/ceph/pull/43184
1811
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1812
1813
* https://tracker.ceph.com/issues/50223
1814
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1815
* https://tracker.ceph.com/issues/51282
1816
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1817
* https://tracker.ceph.com/issues/48773
1818
    qa: scrub does not complete
1819
* https://tracker.ceph.com/issues/52624
1820
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1821
* https://tracker.ceph.com/issues/53859
1822 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1823
1824
h3. 2022 Jan 03
1825
1826
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
1827
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
1828
1829
* https://tracker.ceph.com/issues/50223
1830
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1831
* https://tracker.ceph.com/issues/51964
1832
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1833
* https://tracker.ceph.com/issues/51267
1834
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1835
* https://tracker.ceph.com/issues/51282
1836
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1837
* https://tracker.ceph.com/issues/50821
1838
    qa: untar_snap_rm failure during mds thrashing
1839 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
1840
    mds: "FAILED ceph_assert(!segments.empty())"
1841
* https://tracker.ceph.com/issues/52279
1842 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1843 33 Patrick Donnelly
1844
1845
h3. 2021 Dec 22
1846
1847
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
1848
1849
* https://tracker.ceph.com/issues/52624
1850
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1851
* https://tracker.ceph.com/issues/50223
1852
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1853
* https://tracker.ceph.com/issues/52279
1854
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1855
* https://tracker.ceph.com/issues/50224
1856
    qa: test_mirroring_init_failure_with_recovery failure
1857
* https://tracker.ceph.com/issues/48773
1858
    qa: scrub does not complete
1859 32 Venky Shankar
1860
1861
h3. 2021 Nov 30
1862
1863
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
1864
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
1865
1866
* https://tracker.ceph.com/issues/53436
1867
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
1868
* https://tracker.ceph.com/issues/51964
1869
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1870
* https://tracker.ceph.com/issues/48812
1871
    qa: test_scrub_pause_and_resume_with_abort failure
1872
* https://tracker.ceph.com/issues/51076
1873
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1874
* https://tracker.ceph.com/issues/50223
1875
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1876
* https://tracker.ceph.com/issues/52624
1877
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1878
* https://tracker.ceph.com/issues/50250
1879
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1880 31 Patrick Donnelly
1881
1882
h3. 2021 November 9
1883
1884
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
1885
1886
* https://tracker.ceph.com/issues/53214
1887
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
1888
* https://tracker.ceph.com/issues/48773
1889
    qa: scrub does not complete
1890
* https://tracker.ceph.com/issues/50223
1891
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1892
* https://tracker.ceph.com/issues/51282
1893
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1894
* https://tracker.ceph.com/issues/52624
1895
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1896
* https://tracker.ceph.com/issues/53216
1897
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
1898
* https://tracker.ceph.com/issues/50250
1899
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1900
1901 30 Patrick Donnelly
1902
1903
h3. 2021 November 03
1904
1905
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
1906
1907
* https://tracker.ceph.com/issues/51964
1908
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1909
* https://tracker.ceph.com/issues/51282
1910
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1911
* https://tracker.ceph.com/issues/52436
1912
    fs/ceph: "corrupt mdsmap"
1913
* https://tracker.ceph.com/issues/53074
1914
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1915
* https://tracker.ceph.com/issues/53150
1916
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
1917
* https://tracker.ceph.com/issues/53155
1918
    MDSMonitor: assertion during upgrade to v16.2.5+
1919 29 Patrick Donnelly
1920
1921
h3. 2021 October 26
1922
1923
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1924
1925
* https://tracker.ceph.com/issues/53074
1926
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1927
* https://tracker.ceph.com/issues/52997
1928
    testing: hanging umount
1929
* https://tracker.ceph.com/issues/50824
1930
    qa: snaptest-git-ceph bus error
1931
* https://tracker.ceph.com/issues/52436
1932
    fs/ceph: "corrupt mdsmap"
1933
* https://tracker.ceph.com/issues/48773
1934
    qa: scrub does not complete
1935
* https://tracker.ceph.com/issues/53082
1936
    ceph-fuse: segmentation fault in Client::handle_mds_map
1937
* https://tracker.ceph.com/issues/50223
1938
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1939
* https://tracker.ceph.com/issues/52624
1940
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1941
* https://tracker.ceph.com/issues/50224
1942
    qa: test_mirroring_init_failure_with_recovery failure
1943
* https://tracker.ceph.com/issues/50821
1944
    qa: untar_snap_rm failure during mds thrashing
1945
* https://tracker.ceph.com/issues/50250
1946
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1947
1948 27 Patrick Donnelly
1949
1950 28 Patrick Donnelly
h3. 2021 October 19
1951 27 Patrick Donnelly
1952
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
1953
1954
* https://tracker.ceph.com/issues/52995
1955
    qa: test_standby_count_wanted failure
1956
* https://tracker.ceph.com/issues/52948
1957
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1958
* https://tracker.ceph.com/issues/52996
1959
    qa: test_perf_counters via test_openfiletable
1960
* https://tracker.ceph.com/issues/48772
1961
    qa: pjd: not ok 9, 44, 80
1962
* https://tracker.ceph.com/issues/52997
1963
    testing: hanging umount
1964
* https://tracker.ceph.com/issues/50250
1965
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1966
* https://tracker.ceph.com/issues/52624
1967
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1968
* https://tracker.ceph.com/issues/50223
1969
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1970
* https://tracker.ceph.com/issues/50821
1971
    qa: untar_snap_rm failure during mds thrashing
1972
* https://tracker.ceph.com/issues/48773
1973
    qa: scrub does not complete
1974 26 Patrick Donnelly
1975
1976
h3. 2021 October 12
1977
1978
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
1979
1980
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
1981
1982
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
1983
1984
1985
* https://tracker.ceph.com/issues/51282
1986
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1987
* https://tracker.ceph.com/issues/52948
1988
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1989
* https://tracker.ceph.com/issues/48773
1990
    qa: scrub does not complete
1991
* https://tracker.ceph.com/issues/50224
1992
    qa: test_mirroring_init_failure_with_recovery failure
1993
* https://tracker.ceph.com/issues/52949
1994
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
1995 25 Patrick Donnelly
1996 23 Patrick Donnelly
1997 24 Patrick Donnelly
h3. 2021 October 02
1998
1999
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
2000
2001
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
2002
2003
test_simple failures caused by a PR in this set.
2004
2005
A few reruns because of QA infra noise.
2006
2007
* https://tracker.ceph.com/issues/52822
2008
    qa: failed pacific install on fs:upgrade
2009
* https://tracker.ceph.com/issues/52624
2010
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2011
* https://tracker.ceph.com/issues/50223
2012
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2013
* https://tracker.ceph.com/issues/48773
2014
    qa: scrub does not complete
2015
2016
2017 23 Patrick Donnelly
h3. 2021 September 20
2018
2019
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
2020
2021
* https://tracker.ceph.com/issues/52677
2022
    qa: test_simple failure
2023
* https://tracker.ceph.com/issues/51279
2024
    kclient hangs on umount (testing branch)
2025
* https://tracker.ceph.com/issues/50223
2026
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2027
* https://tracker.ceph.com/issues/50250
2028
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
2029
* https://tracker.ceph.com/issues/52624
2030
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
2031
* https://tracker.ceph.com/issues/52438
2032
    qa: ffsb timeout
2033 22 Patrick Donnelly
2034
2035
h3. 2021 September 10

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451

* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/52625
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/52626
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 August 27

Several jobs died because of device failures.

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746

* https://tracker.ceph.com/issues/52430
    mds: fast async create client mount breaks racy test
* https://tracker.ceph.com/issues/52436
    fs/ceph: "corrupt mdsmap"
* https://tracker.ceph.com/issues/52437
    mds: InoTable::replay_release_ids abort via test_inotable_sync
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout
* https://tracker.ceph.com/issues/52439
    qa: acls does not compile on centos stream

h3. 2021 July 30

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022

* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51975
    pybind/mgr/stats: KeyError

h3. 2021 July 28

https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/

with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/

* https://tracker.ceph.com/issues/51905
    qa: "error reading sessionmap 'mds1_sessionmap'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
* https://tracker.ceph.com/issues/51267
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)

h3. 2021 July 16

https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error

h3. 2021 July 04

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904

* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/39150
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")

h3. 2021 July 01

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056

* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete

h3. 2021 June 26

https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/

* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51410
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80

h3. 2021 June 21

https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/

One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599

* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/50495
    libcephfs: shutdown race fails with status 141
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2021 June 16

https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/

MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51278
    mds: "FAILED ceph_assert(!segments.empty())"
* https://tracker.ceph.com/issues/51279
    kclient hangs on umount (testing branch)
* https://tracker.ceph.com/issues/51280
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51281
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51076
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/51282
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings

h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures caused by PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

And a failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing