
h1. MAIN

h3. NEW ENTRY BELOW

h3. 13 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230908.065909

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
    qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62081
    tasks/fscrypt-common does not finish, times out

h3. 12 Sep 2023

https://pulpito.ceph.com/pdonnell-2023-09-12_14:07:50-fs-wip-batrick-testing-20230912.122437-distro-default-smithi/

A few failures caused by qa refactoring in https://github.com/ceph/ceph/pull/48130 ; notably:

* Test failure: test_export_pin_many (tasks.cephfs.test_exports.TestExportPin)

Failures:

* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/62832
  common: config_proxy deadlock during shutdown (and possibly other times)
* https://tracker.ceph.com/issues/59413
  cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 11 Sep 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230830.153114

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/61243
  qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
* https://tracker.ceph.com/issues/62567
  postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 6 Sep 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/62484
  qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62567
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 6 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/

* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62556
  test_acls: xfstests_dev: python2 is missing
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/62567
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
* https://tracker.ceph.com/issues/62702
  workunit test suites/fsstress.sh on smithi066 with status 124

h3. 5 Sep 2023

https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
  this run has failures, but according to Adam King these are not relevant and should be ignored

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/59348
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/62067
  ffsb.sh failure "Resource temporarily unavailable"
* https://tracker.ceph.com/issues/57656
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/62187
  iozone.sh: line 5: iozone: command not found
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 31 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59413
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
* https://tracker.ceph.com/issues/62653
    qa: unimplemented fcntl command: 1036 with fsstress
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 25 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807

* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 24 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62484
    qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62567
    postgres workunit times out - MDS_SLOW_REQUEST in logs

h3. 22 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61243
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/62511
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)

h3. 14 Aug 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61243 (NEW)
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 28 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
    ior build failure
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://github.com/ceph/ceph/pull/52556
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
* https://tracker.ceph.com/issues/62187
    iozone: command not found
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
* https://tracker.ceph.com/issues/62188
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)

h3. 24 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
One more run to check whether blogbench.sh fails every time:
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
The blogbench.sh failures were seen on the above runs for the first time. The following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (test_strays.TestStrays) failed
* https://tracker.ceph.com/issues/53859
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/61982
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61399
  ior build failure
* https://tracker.ceph.com/issues/57676
  error during scrub thrashing: backtrace
* https://tracker.ceph.com/issues/38452
  'sudo -u postgres -- pgbench -s 500 -i' failed
* https://tracker.ceph.com/issues/62126
  blogbench.sh failure

h3. 18 July 2023

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/62067
    ffsb.sh failure "Resource temporarily unavailable"

h3. 17 July 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136

* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61182
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61957
    test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61892
    test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/62036
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())

h3. 13 July 2023 Run 2

https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/61982
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/61399
  ior build failure

h3. 13 July 2023

https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/

* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61945
    LibCephFS.DelegTimeout failure
* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59346
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61982
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)

h3. 13 Jul 2023

https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/

* https://tracker.ceph.com/issues/61957
  test_client_limits.TestClientLimits.test_client_release_bug
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/48773
  scrub does not complete: reached max tries
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror

h3. 12 July 2023

https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/

* https://tracker.ceph.com/issues/61892
  test_strays.TestStrays.test_snapshot_remove failed
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/53859
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/52438
  qa: ffsb timeout
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61182
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
* https://tracker.ceph.com/issues/61400
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
* https://tracker.ceph.com/issues/48773
  reached max tries: scrub does not complete

h3. 05 July 2023

https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/

* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"

h3. 27 Jun 2023

https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/

* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61831
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)

h3. 22 June 2023

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61394
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61762
    qa: wait_for_clean: failed before timeout expired
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61790
    cephfs client to mds comms remain silent after reconnect
* https://tracker.ceph.com/issues/61791
    snaptest-git-ceph.sh test timed out (job dead)

h3. 20 June 2023

https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/44565
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
* https://tracker.ceph.com/issues/61737
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'

h3. 16 June 2023

https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
(bins were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/

* https://tracker.ceph.com/issues/59344
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61399
  libmpich: undefined references to fi_strerror
* https://tracker.ceph.com/issues/58945
  xfstests-dev: ceph-fuse: generic
* https://tracker.ceph.com/issues/58742
  xfstests-dev: kcephfs: generic

h3. 24 May 2023

https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/

* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/61399
    qa: "[Makefile:299: ior] Error 1"
* https://tracker.ceph.com/issues/61265
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/59346
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
* https://tracker.ceph.com/issues/61400
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59344
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
* https://tracker.ceph.com/issues/61407
    mds: abort on CInode::verify_dirfrags
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/61409
    qa: _test_stale_caps does not wait for file flush before stat

h3. 15 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/61148
    dbench test results in call trace in dmesg [kclient bug]
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 11 May 2023

https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/

* https://tracker.ceph.com/issues/59684 [kclient bug]
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
* https://tracker.ceph.com/issues/57655
  qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/55805
  error during scrub thrashing reached max tries in 900 secs
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/59342
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
* https://tracker.ceph.com/issues/58949
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
* https://tracker.ceph.com/issues/61243 (NEW)
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed

h3. 11 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005

(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 09 May 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59350
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
* https://tracker.ceph.com/issues/59683
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
* https://tracker.ceph.com/issues/59684 [kclient bug]
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/59348
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)

h3. 10 Apr 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure

h3. 31 Mar 2023

run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/

There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).

* https://tracker.ceph.com/issues/57676
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/54460
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58220#note-9
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
* https://tracker.ceph.com/issues/56695
  Command failed (workunit test suites/pjd.sh)
* https://tracker.ceph.com/issues/58564
  workunit dbench failed with error code 1
* https://tracker.ceph.com/issues/57206
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/57580
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
* https://tracker.ceph.com/issues/58940
  ceph osd hit ceph_abort
* https://tracker.ceph.com/issues/55805
  error scrub thrashing reached max tries in 900 secs

h3. 30 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747

* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 29 March 2023

https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/59230
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
* https://tracker.ceph.com/issues/58938
    qa: xfstests-dev's generic test suite has 7 failures with kclient

h3. 13 Mar 2023

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)

h3. 09 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure

h3. 07 Mar 2023

https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/57656
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
* https://tracker.ceph.com/issues/58934
    snaptest-git-ceph.sh failure with ceph-fuse

h3. 28 Feb 2023

https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/

* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)

(teuthology infra issues causing testing delays - merging PRs which have tests passing)

h3. 25 Jan 2023

https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/

* https://tracker.ceph.com/issues/52624
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
* https://tracker.ceph.com/issues/56695
    [RHEL stock] pjd test failures
* https://tracker.ceph.com/issues/57676
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
* https://tracker.ceph.com/issues/56446
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
* https://tracker.ceph.com/issues/57206
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
* https://tracker.ceph.com/issues/58220
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/56011
    fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
* https://tracker.ceph.com/issues/54460
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
991
h3. 30 JAN 2023
992
993
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
994
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
995 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
996
997 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
998
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
999
* https://tracker.ceph.com/issues/56695
1000
  [RHEL stock] pjd test failures
1001
* https://tracker.ceph.com/issues/57676
1002
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
1003
* https://tracker.ceph.com/issues/55332
1004
  Failure in snaptest-git-ceph.sh
1005
* https://tracker.ceph.com/issues/51964
1006
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1007
* https://tracker.ceph.com/issues/56446
1008
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1009
* https://tracker.ceph.com/issues/57655 
1010
  qa: fs:mixed-clients kernel_untar_build failure
1011
* https://tracker.ceph.com/issues/54460
1012
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1013 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
1014
  mds: fsstress.sh hangs with multimds
1015 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
1016 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
1017
1018
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
1019 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
1020
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
1021 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
1022 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
1023
1024
h3. 15 Dec 2022
1025
1026
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
1027
1028
* https://tracker.ceph.com/issues/52624
1029
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1030
* https://tracker.ceph.com/issues/56695
1031
    [RHEL stock] pjd test failures
1032
* https://tracker.ceph.com/issues/58219
1033
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1034
* https://tracker.ceph.com/issues/57655
1035
    qa: fs:mixed-clients kernel_untar_build failure
1036
* https://tracker.ceph.com/issues/57676
1037
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1038
* https://tracker.ceph.com/issues/58340
1039 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
1040
1041
h3. 08 Dec 2022
1042 99 Venky Shankar
1043 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
1044
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
1045
1046
(lots of transient git.ceph.com failures)
1047
1048
* https://tracker.ceph.com/issues/52624
1049
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1050
* https://tracker.ceph.com/issues/56695
1051
    [RHEL stock] pjd test failures
1052
* https://tracker.ceph.com/issues/57655
1053
    qa: fs:mixed-clients kernel_untar_build failure
1054
* https://tracker.ceph.com/issues/58219
1055
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
1056
* https://tracker.ceph.com/issues/58220
1057
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
1058 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
1059
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1060 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
1061
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1062
* https://tracker.ceph.com/issues/54460
1063
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1064 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
1065 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
1066
1067
h3. 14 Oct 2022
1068
1069
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
1070
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
1071
1072
* https://tracker.ceph.com/issues/52624
1073
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1074
* https://tracker.ceph.com/issues/55804
1075
    Command failed (workunit test suites/pjd.sh)
1076
* https://tracker.ceph.com/issues/51964
1077
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1078
* https://tracker.ceph.com/issues/57682
1079
    client: ERROR: test_reconnect_after_blocklisted
1080 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
1081 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1082
1083
h3. 10 Oct 2022
1084 92 Rishabh Dave
1085 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1086
1087
reruns
1088
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1089 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
1090 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
1091 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
1092 91 Rishabh Dave
1093
known bugs
1094
* https://tracker.ceph.com/issues/52624
1095
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1096
* https://tracker.ceph.com/issues/50223
1097
  client.xxxx isn't responding to mclientcaps(revoke
1098
* https://tracker.ceph.com/issues/57299
1099
  qa: test_dump_loads fails with JSONDecodeError
1100
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1101
  qa: fs:mixed-clients kernel_untar_build failure
1102
* https://tracker.ceph.com/issues/57206
1103 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1104
1105
h3. 2022 Sep 29
1106
1107
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1108
1109
* https://tracker.ceph.com/issues/55804
1110
  Command failed (workunit test suites/pjd.sh)
1111
* https://tracker.ceph.com/issues/36593
1112
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1113
* https://tracker.ceph.com/issues/52624
1114
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1115
* https://tracker.ceph.com/issues/51964
1116
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1117
* https://tracker.ceph.com/issues/56632
1118
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1119
* https://tracker.ceph.com/issues/50821
1120 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1121
1122
h3. 2022 Sep 26
1123
1124
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1125
1126
* https://tracker.ceph.com/issues/55804
1127
    qa failure: pjd link tests failed
1128
* https://tracker.ceph.com/issues/57676
1129
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1130
* https://tracker.ceph.com/issues/52624
1131
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1132
* https://tracker.ceph.com/issues/57580
1133
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1134
* https://tracker.ceph.com/issues/48773
1135
    qa: scrub does not complete
1136
* https://tracker.ceph.com/issues/57299
1137
    qa: test_dump_loads fails with JSONDecodeError
1138
* https://tracker.ceph.com/issues/57280
1139
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1140
* https://tracker.ceph.com/issues/57205
1141
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1142
* https://tracker.ceph.com/issues/57656
1143
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1144
* https://tracker.ceph.com/issues/57677
1145
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1146
* https://tracker.ceph.com/issues/57206
1147
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1148
* https://tracker.ceph.com/issues/57446
1149
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1150 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1151
    qa: fs:mixed-clients kernel_untar_build failure
1152 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1153
    client: ERROR: test_reconnect_after_blocklisted
1154 87 Patrick Donnelly
1155
1156
h3. 2022 Sep 22
1157
1158
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1159
1160
* https://tracker.ceph.com/issues/57299
1161
    qa: test_dump_loads fails with JSONDecodeError
1162
* https://tracker.ceph.com/issues/57205
1163
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1164
* https://tracker.ceph.com/issues/52624
1165
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1166
* https://tracker.ceph.com/issues/57580
1167
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1168
* https://tracker.ceph.com/issues/57280
1169
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1170
* https://tracker.ceph.com/issues/48773
1171
    qa: scrub does not complete
1172
* https://tracker.ceph.com/issues/56446
1173
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1174
* https://tracker.ceph.com/issues/57206
1175
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1176
* https://tracker.ceph.com/issues/51267
1177
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1178
1179
NEW:
1180
1181
* https://tracker.ceph.com/issues/57656
1182
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1183
* https://tracker.ceph.com/issues/57655 [Exist in main as well]
1184
    qa: fs:mixed-clients kernel_untar_build failure
1185
* https://tracker.ceph.com/issues/57657
1186
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1187
1188
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1189 80 Venky Shankar
1190 79 Venky Shankar
1191
h3. 2022 Sep 16
1192
1193
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1194
1195
* https://tracker.ceph.com/issues/57446
1196
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1197
* https://tracker.ceph.com/issues/57299
1198
    qa: test_dump_loads fails with JSONDecodeError
1199
* https://tracker.ceph.com/issues/50223
1200
    client.xxxx isn't responding to mclientcaps(revoke)
1201
* https://tracker.ceph.com/issues/52624
1202
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1203
* https://tracker.ceph.com/issues/57205
1204
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1205
* https://tracker.ceph.com/issues/57280
1206
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1207
* https://tracker.ceph.com/issues/51282
1208
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1209
* https://tracker.ceph.com/issues/48203
1210
  https://tracker.ceph.com/issues/36593
1211
    qa: quota failure
1212
    qa: quota failure caused by clients stepping on each other
1213
* https://tracker.ceph.com/issues/57580
1214 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1215
1216 76 Rishabh Dave
1217
h3. 2022 Aug 26
1218
1219
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1220
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1221
1222
* https://tracker.ceph.com/issues/57206
1223
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1224
* https://tracker.ceph.com/issues/56632
1225
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1226
* https://tracker.ceph.com/issues/56446
1227
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1228
* https://tracker.ceph.com/issues/51964
1229
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1230
* https://tracker.ceph.com/issues/53859
1231
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1232
1233
* https://tracker.ceph.com/issues/54460
1234
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1235
* https://tracker.ceph.com/issues/54462
1236
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1239
* https://tracker.ceph.com/issues/36593
1240
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1241
1242
* https://tracker.ceph.com/issues/52624
1243
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1244
* https://tracker.ceph.com/issues/55804
1245
  Command failed (workunit test suites/pjd.sh)
1246
* https://tracker.ceph.com/issues/50223
1247
  client.xxxx isn't responding to mclientcaps(revoke)
1248 75 Venky Shankar
1249
1250
h3. 2022 Aug 22
1251
1252
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1253
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1254
1255
* https://tracker.ceph.com/issues/52624
1256
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1257
* https://tracker.ceph.com/issues/56446
1258
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1259
* https://tracker.ceph.com/issues/55804
1260
    Command failed (workunit test suites/pjd.sh)
1261
* https://tracker.ceph.com/issues/51278
1262
    mds: "FAILED ceph_assert(!segments.empty())"
1263
* https://tracker.ceph.com/issues/54460
1264
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1265
* https://tracker.ceph.com/issues/57205
1266
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1267
* https://tracker.ceph.com/issues/57206
1268
    ceph_test_libcephfs_reclaim crashes during test
1269
* https://tracker.ceph.com/issues/53859
1270
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1271
* https://tracker.ceph.com/issues/50223
1272 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1273
1274
h3. 2022 Aug 12
1275
1276
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1277
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1278
1279
* https://tracker.ceph.com/issues/52624
1280
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1281
* https://tracker.ceph.com/issues/56446
1282
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1283
* https://tracker.ceph.com/issues/51964
1284
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1285
* https://tracker.ceph.com/issues/55804
1286
    Command failed (workunit test suites/pjd.sh)
1287
* https://tracker.ceph.com/issues/50223
1288
    client.xxxx isn't responding to mclientcaps(revoke)
1289
* https://tracker.ceph.com/issues/50821
1290 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1291 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1292 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1293
1294
h3. 2022 Aug 04
1295
1296
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1297
1298 69 Rishabh Dave
Unrelated teuthology failure on RHEL
1299 68 Rishabh Dave
1300
h3. 2022 Jul 25
1301
1302
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1303
1304 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1305
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1306 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1307
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1308
1309
* https://tracker.ceph.com/issues/55804
1310
  Command failed (workunit test suites/pjd.sh)
1311
* https://tracker.ceph.com/issues/50223
1312
  client.xxxx isn't responding to mclientcaps(revoke)
1313
1314
* https://tracker.ceph.com/issues/54460
1315
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1316 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1317 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1318 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1319 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1320
1321
h3. 2022 July 22
1322
1323
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1324
1325
MDS_HEALTH_DUMMY error in log fixed by follow-up commit.
1326
transient selinux ping failure
1327
1328
* https://tracker.ceph.com/issues/56694
1329
    qa: avoid blocking forever on hung umount
1330
* https://tracker.ceph.com/issues/56695
1331
    [RHEL stock] pjd test failures
1332
* https://tracker.ceph.com/issues/56696
1333
    admin keyring disappears during qa run
1334
* https://tracker.ceph.com/issues/56697
1335
    qa: fs/snaps fails for fuse
1336
* https://tracker.ceph.com/issues/50222
1337
    osd: 5.2s0 deep-scrub : stat mismatch
1338
* https://tracker.ceph.com/issues/56698
1339
    client: FAILED ceph_assert(_size == 0)
1340
* https://tracker.ceph.com/issues/50223
1341
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1342 66 Rishabh Dave
1343 65 Rishabh Dave
1344
h3. 2022 Jul 15
1345
1346
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1347
1348
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1349
1350
* https://tracker.ceph.com/issues/53859
1351
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1352
* https://tracker.ceph.com/issues/55804
1353
  Command failed (workunit test suites/pjd.sh)
1354
* https://tracker.ceph.com/issues/50223
1355
  client.xxxx isn't responding to mclientcaps(revoke)
1356
* https://tracker.ceph.com/issues/50222
1357
  osd: deep-scrub : stat mismatch
1358
1359
* https://tracker.ceph.com/issues/56632
1360
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1361
* https://tracker.ceph.com/issues/56634
1362
  workunit test fs/snaps/snaptest-intodir.sh
1363
* https://tracker.ceph.com/issues/56644
1364
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1365
1366 61 Rishabh Dave
1367
1368
h3. 2022 July 05
1369 62 Rishabh Dave
1370 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1371
1372
On the 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1373
1374
On the 2nd re-run only a few jobs failed -
1375 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1377
1378
* https://tracker.ceph.com/issues/56446
1379
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1380
* https://tracker.ceph.com/issues/55804
1381
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1382
1383
* https://tracker.ceph.com/issues/56445
1384 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1385
* https://tracker.ceph.com/issues/51267
1386
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1387 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1388
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1389 61 Rishabh Dave
1390 58 Venky Shankar
1391
1392
h3. 2022 July 04
1393
1394
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1395
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
1396
1397
* https://tracker.ceph.com/issues/56445
1398 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1399
* https://tracker.ceph.com/issues/56446
1400
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1401
* https://tracker.ceph.com/issues/51964
1402 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1403 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1404 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1405
1406
h3. 2022 June 20
1407
1408
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1409
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1410
1411
* https://tracker.ceph.com/issues/52624
1412
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1413
* https://tracker.ceph.com/issues/55804
1414
    qa failure: pjd link tests failed
1415
* https://tracker.ceph.com/issues/54108
1416
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1417
* https://tracker.ceph.com/issues/55332
1418 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1419
1420
h3. 2022 June 13
1421
1422
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1423
1424
* https://tracker.ceph.com/issues/56024
1425
    cephadm: removes ceph.conf during qa run causing command failure
1426
* https://tracker.ceph.com/issues/48773
1427
    qa: scrub does not complete
1428
* https://tracker.ceph.com/issues/56012
1429
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1430 55 Venky Shankar
1431 54 Venky Shankar
1432
h3. 2022 Jun 13
1433
1434
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1435
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1436
1437
* https://tracker.ceph.com/issues/52624
1438
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1439
* https://tracker.ceph.com/issues/51964
1440
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1441
* https://tracker.ceph.com/issues/53859
1442
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1443
* https://tracker.ceph.com/issues/55804
1444
    qa failure: pjd link tests failed
1445
* https://tracker.ceph.com/issues/56003
1446
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1447
* https://tracker.ceph.com/issues/56011
1448
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1449
* https://tracker.ceph.com/issues/56012
1450 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1451
1452
h3. 2022 Jun 07
1453
1454
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1455
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1456
1457
* https://tracker.ceph.com/issues/52624
1458
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1459
* https://tracker.ceph.com/issues/50223
1460
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1461
* https://tracker.ceph.com/issues/50224
1462 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1463
1464
h3. 2022 May 12
1465 52 Venky Shankar
1466 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1467
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)
1468
1469
* https://tracker.ceph.com/issues/52624
1470
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1471
* https://tracker.ceph.com/issues/50223
1472
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1473
* https://tracker.ceph.com/issues/55332
1474
    Failure in snaptest-git-ceph.sh
1475
* https://tracker.ceph.com/issues/53859
1476 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1477 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1478
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1479 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1480 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1481
1482 50 Venky Shankar
h3. 2022 May 04
1483
1484
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1485 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1486
1487
* https://tracker.ceph.com/issues/52624
1488
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1489
* https://tracker.ceph.com/issues/50223
1490
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1491
* https://tracker.ceph.com/issues/55332
1492
    Failure in snaptest-git-ceph.sh
1493
* https://tracker.ceph.com/issues/53859
1494
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1495
* https://tracker.ceph.com/issues/55516
1496
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1497
* https://tracker.ceph.com/issues/55537
1498
    mds: crash during fs:upgrade test
1499
* https://tracker.ceph.com/issues/55538
1500 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1501
1502
h3. 2022 Apr 25
1503
1504
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1505
1506
* https://tracker.ceph.com/issues/52624
1507
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1508
* https://tracker.ceph.com/issues/50223
1509
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1510
* https://tracker.ceph.com/issues/55258
1511
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1512
* https://tracker.ceph.com/issues/55377
1513 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1514
1515
h3. 2022 Apr 14
1516
1517
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1518
1519
* https://tracker.ceph.com/issues/52624
1520
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1521
* https://tracker.ceph.com/issues/50223
1522
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1523
* https://tracker.ceph.com/issues/52438
1524
    qa: ffsb timeout
1525
* https://tracker.ceph.com/issues/55170
1526
    mds: crash during rejoin (CDir::fetch_keys)
1527
* https://tracker.ceph.com/issues/55331
1528
    pjd failure
1529
* https://tracker.ceph.com/issues/48773
1530
    qa: scrub does not complete
1531
* https://tracker.ceph.com/issues/55332
1532
    Failure in snaptest-git-ceph.sh
1533
* https://tracker.ceph.com/issues/55258
1534 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1535
1536 46 Venky Shankar
h3. 2022 Apr 11
1537 45 Venky Shankar
1538
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1539
1540
* https://tracker.ceph.com/issues/48773
1541
    qa: scrub does not complete
1542
* https://tracker.ceph.com/issues/52624
1543
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1544
* https://tracker.ceph.com/issues/52438
1545
    qa: ffsb timeout
1546
* https://tracker.ceph.com/issues/48680
1547
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1548
* https://tracker.ceph.com/issues/55236
1549
    qa: fs/snaps tests fails with "hit max job timeout"
1550
* https://tracker.ceph.com/issues/54108
1551
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1552
* https://tracker.ceph.com/issues/54971
1553
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1554
* https://tracker.ceph.com/issues/50223
1555
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1556
* https://tracker.ceph.com/issues/55258
1557 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1558 42 Venky Shankar
1559 43 Venky Shankar
h3. 2022 Mar 21
1560
1561
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1562
1563
The run didn't go well - lots of failures. Debugging by dropping PRs and running against the master branch; only merging unrelated PRs that pass tests.
1564
1565
1566 42 Venky Shankar
h3. 2022 Mar 08
1567
1568
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1569
1570
rerun with
1571
- (drop) https://github.com/ceph/ceph/pull/44679
1572
- (drop) https://github.com/ceph/ceph/pull/44958
1573
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1574
1575
* https://tracker.ceph.com/issues/54419 (new)
1576
    `ceph orch upgrade start` seems to never reach completion
1577
* https://tracker.ceph.com/issues/51964
1578
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1579
* https://tracker.ceph.com/issues/52624
1580
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1581
* https://tracker.ceph.com/issues/50223
1582
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1583
* https://tracker.ceph.com/issues/52438
1584
    qa: ffsb timeout
1585
* https://tracker.ceph.com/issues/50821
1586
    qa: untar_snap_rm failure during mds thrashing
1587 41 Venky Shankar
1588
1589
h3. 2022 Feb 09
1590
1591
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1592
1593
rerun with
1594
- (drop) https://github.com/ceph/ceph/pull/37938
1595
- (drop) https://github.com/ceph/ceph/pull/44335
1596
- (drop) https://github.com/ceph/ceph/pull/44491
1597
- (drop) https://github.com/ceph/ceph/pull/44501
1598
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1599
1600
* https://tracker.ceph.com/issues/51964
1601
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1602
* https://tracker.ceph.com/issues/54066
1603
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1604
* https://tracker.ceph.com/issues/48773
1605
    qa: scrub does not complete
1606
* https://tracker.ceph.com/issues/52624
1607
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1608
* https://tracker.ceph.com/issues/50223
1609
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1610
* https://tracker.ceph.com/issues/52438
1611 40 Patrick Donnelly
    qa: ffsb timeout
1612
1613
h3. 2022 Feb 01
1614
1615
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1616
1617
* https://tracker.ceph.com/issues/54107
1618
    kclient: hang during umount
1619
* https://tracker.ceph.com/issues/54106
1620
    kclient: hang during workunit cleanup
1621
* https://tracker.ceph.com/issues/54108
1622
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1623
* https://tracker.ceph.com/issues/48773
1624
    qa: scrub does not complete
1625
* https://tracker.ceph.com/issues/52438
1626
    qa: ffsb timeout
1627 36 Venky Shankar
1628
1629
h3. 2022 Jan 13
1630 39 Venky Shankar
1631 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1632 38 Venky Shankar
1633
rerun with:
1634 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1635
- (drop) https://github.com/ceph/ceph/pull/43184
1636
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1637
1638
* https://tracker.ceph.com/issues/50223
1639
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1640
* https://tracker.ceph.com/issues/51282
1641
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1642
* https://tracker.ceph.com/issues/48773
1643
    qa: scrub does not complete
1644
* https://tracker.ceph.com/issues/52624
1645
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1646
* https://tracker.ceph.com/issues/53859
1647 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1648
1649
h3. 2022 Jan 03
1650
1651
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
1652
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
1653
1654
* https://tracker.ceph.com/issues/50223
1655
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1656
* https://tracker.ceph.com/issues/51964
1657
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1658
* https://tracker.ceph.com/issues/51267
1659
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1660
* https://tracker.ceph.com/issues/51282
1661
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1662
* https://tracker.ceph.com/issues/50821
1663
    qa: untar_snap_rm failure during mds thrashing
1664 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
1665
    mds: "FAILED ceph_assert(!segments.empty())"
1666
* https://tracker.ceph.com/issues/52279
1667 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1668 33 Patrick Donnelly
1669
1670
h3. 2021 Dec 22
1671
1672
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
1673
1674
* https://tracker.ceph.com/issues/52624
1675
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1676
* https://tracker.ceph.com/issues/50223
1677
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1678
* https://tracker.ceph.com/issues/52279
1679
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1680
* https://tracker.ceph.com/issues/50224
1681
    qa: test_mirroring_init_failure_with_recovery failure
1682
* https://tracker.ceph.com/issues/48773
1683
    qa: scrub does not complete
1684 32 Venky Shankar
1685
1686
h3. 2021 Nov 30
1687
1688
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
1689
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
1690
1691
* https://tracker.ceph.com/issues/53436
1692
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
1693
* https://tracker.ceph.com/issues/51964
1694
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1695
* https://tracker.ceph.com/issues/48812
1696
    qa: test_scrub_pause_and_resume_with_abort failure
1697
* https://tracker.ceph.com/issues/51076
1698
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1699
* https://tracker.ceph.com/issues/50223
1700
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1701
* https://tracker.ceph.com/issues/52624
1702
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1703
* https://tracker.ceph.com/issues/50250
1704
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1705 31 Patrick Donnelly
1706
1707
h3. 2021 November 9
1708
1709
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
1710
1711
* https://tracker.ceph.com/issues/53214
1712
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
1713
* https://tracker.ceph.com/issues/48773
1714
    qa: scrub does not complete
1715
* https://tracker.ceph.com/issues/50223
1716
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1717
* https://tracker.ceph.com/issues/51282
1718
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1719
* https://tracker.ceph.com/issues/52624
1720
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1721
* https://tracker.ceph.com/issues/53216
1722
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
1723
* https://tracker.ceph.com/issues/50250
1724
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1725
1726 30 Patrick Donnelly
1727
1728
h3. 2021 November 03
1729
1730
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
1731
1732
* https://tracker.ceph.com/issues/51964
1733
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1734
* https://tracker.ceph.com/issues/51282
1735
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1736
* https://tracker.ceph.com/issues/52436
1737
    fs/ceph: "corrupt mdsmap"
1738
* https://tracker.ceph.com/issues/53074
1739
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1740
* https://tracker.ceph.com/issues/53150
1741
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
1742
* https://tracker.ceph.com/issues/53155
1743
    MDSMonitor: assertion during upgrade to v16.2.5+
1744 29 Patrick Donnelly
1745
1746
h3. 2021 October 26
1747
1748
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1749
1750
* https://tracker.ceph.com/issues/53074
1751
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1752
* https://tracker.ceph.com/issues/52997
1753
    testing: hanging umount
1754
* https://tracker.ceph.com/issues/50824
1755
    qa: snaptest-git-ceph bus error
1756
* https://tracker.ceph.com/issues/52436
1757
    fs/ceph: "corrupt mdsmap"
1758
* https://tracker.ceph.com/issues/48773
1759
    qa: scrub does not complete
1760
* https://tracker.ceph.com/issues/53082
1761
    ceph-fuse: segmentation fault in Client::handle_mds_map
1762
* https://tracker.ceph.com/issues/50223
1763
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1764
* https://tracker.ceph.com/issues/52624
1765
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1766
* https://tracker.ceph.com/issues/50224
1767
    qa: test_mirroring_init_failure_with_recovery failure
1768
* https://tracker.ceph.com/issues/50821
1769
    qa: untar_snap_rm failure during mds thrashing
1770
* https://tracker.ceph.com/issues/50250
1771
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1772
1773 27 Patrick Donnelly
1774
1775 28 Patrick Donnelly
h3. 2021 October 19
1776 27 Patrick Donnelly
1777
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
1778
1779
* https://tracker.ceph.com/issues/52995
1780
    qa: test_standby_count_wanted failure
1781
* https://tracker.ceph.com/issues/52948
1782
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1783
* https://tracker.ceph.com/issues/52996
1784
    qa: test_perf_counters via test_openfiletable
1785
* https://tracker.ceph.com/issues/48772
1786
    qa: pjd: not ok 9, 44, 80
1787
* https://tracker.ceph.com/issues/52997
1788
    testing: hanging umount
1789
* https://tracker.ceph.com/issues/50250
1790
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1791
* https://tracker.ceph.com/issues/52624
1792
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1793
* https://tracker.ceph.com/issues/50223
1794
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1795
* https://tracker.ceph.com/issues/50821
1796
    qa: untar_snap_rm failure during mds thrashing
1797
* https://tracker.ceph.com/issues/48773
1798
    qa: scrub does not complete
1799 26 Patrick Donnelly
1800
1801
h3. 2021 October 12
1802
1803
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
1804
1805
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
1806
1807
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
1808
1809
1810
* https://tracker.ceph.com/issues/51282
1811
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1812
* https://tracker.ceph.com/issues/52948
1813
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1814
* https://tracker.ceph.com/issues/48773
1815
    qa: scrub does not complete
1816
* https://tracker.ceph.com/issues/50224
1817
    qa: test_mirroring_init_failure_with_recovery failure
1818
* https://tracker.ceph.com/issues/52949
1819
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
1820 25 Patrick Donnelly
1821 23 Patrick Donnelly
1822 24 Patrick Donnelly
h3. 2021 October 02
1823
1824
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
1825
1826
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
1827
1828
test_simple failures caused by PR in this set.
1829
1830
A few reruns because of QA infra noise.
1831
1832
* https://tracker.ceph.com/issues/52822
1833
    qa: failed pacific install on fs:upgrade
1834
* https://tracker.ceph.com/issues/52624
1835
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1836
* https://tracker.ceph.com/issues/50223
1837
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1838
* https://tracker.ceph.com/issues/48773
1839
    qa: scrub does not complete
1840
1841
1842 23 Patrick Donnelly
h3. 2021 September 20
1843
1844
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
1845
1846
* https://tracker.ceph.com/issues/52677
1847
    qa: test_simple failure
1848
* https://tracker.ceph.com/issues/51279
1849
    kclient hangs on umount (testing branch)
1850
* https://tracker.ceph.com/issues/50223
1851
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1852
* https://tracker.ceph.com/issues/50250
1853
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1854
* https://tracker.ceph.com/issues/52624
1855
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1856
* https://tracker.ceph.com/issues/52438
1857
    qa: ffsb timeout
1858 22 Patrick Donnelly
1859
1860
h3. 2021 September 10
1861
1862
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
1863
1864
* https://tracker.ceph.com/issues/50223
1865
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1866
* https://tracker.ceph.com/issues/50250
1867
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1868
* https://tracker.ceph.com/issues/52624
1869
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1870
* https://tracker.ceph.com/issues/52625
1871
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
1872
* https://tracker.ceph.com/issues/52439
1873
    qa: acls does not compile on centos stream
1874
* https://tracker.ceph.com/issues/50821
1875
    qa: untar_snap_rm failure during mds thrashing
1876
* https://tracker.ceph.com/issues/48773
1877
    qa: scrub does not complete
1878
* https://tracker.ceph.com/issues/52626
1879
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
1880
* https://tracker.ceph.com/issues/51279
1881
    kclient hangs on umount (testing branch)
1882 21 Patrick Donnelly
1883
1884
h3. 2021 August 27
1885
1886
Several jobs died because of device failures.
1887
1888
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
1889
1890
* https://tracker.ceph.com/issues/52430
1891
    mds: fast async create client mount breaks racy test
1892
* https://tracker.ceph.com/issues/52436
1893
    fs/ceph: "corrupt mdsmap"
1894
* https://tracker.ceph.com/issues/52437
1895
    mds: InoTable::replay_release_ids abort via test_inotable_sync
1896
* https://tracker.ceph.com/issues/51282
1897
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1898
* https://tracker.ceph.com/issues/52438
1899
    qa: ffsb timeout
1900
* https://tracker.ceph.com/issues/52439
1901
    qa: acls does not compile on centos stream
1902 20 Patrick Donnelly
1903
1904
h3. 2021 July 30
1905
1906
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
1907
1908
* https://tracker.ceph.com/issues/50250
1909
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1910
* https://tracker.ceph.com/issues/51282
1911
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1912
* https://tracker.ceph.com/issues/48773
1913
    qa: scrub does not complete
1914
* https://tracker.ceph.com/issues/51975
1915
    pybind/mgr/stats: KeyError
1916 19 Patrick Donnelly
1917
1918
h3. 2021 July 28
1919
1920
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
1921
1922
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
1923
1924
* https://tracker.ceph.com/issues/51905
1925
    qa: "error reading sessionmap 'mds1_sessionmap'"
1926
* https://tracker.ceph.com/issues/48773
1927
    qa: scrub does not complete
1928
* https://tracker.ceph.com/issues/50250
1929
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1930
* https://tracker.ceph.com/issues/51267
1931
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1932
* https://tracker.ceph.com/issues/51279
1933
    kclient hangs on umount (testing branch)
1934 18 Patrick Donnelly
1935
1936
h3. 2021 July 16
1937
1938
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
1939
1940
* https://tracker.ceph.com/issues/48773
1941
    qa: scrub does not complete
1942
* https://tracker.ceph.com/issues/48772
1943
    qa: pjd: not ok 9, 44, 80
1944
* https://tracker.ceph.com/issues/45434
1945
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1946
* https://tracker.ceph.com/issues/51279
1947
    kclient hangs on umount (testing branch)
1948
* https://tracker.ceph.com/issues/50824
1949
    qa: snaptest-git-ceph bus error
1950 17 Patrick Donnelly
1951
1952
h3. 2021 July 04
1953
1954
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
1955
1956
* https://tracker.ceph.com/issues/48773
1957
    qa: scrub does not complete
1958
* https://tracker.ceph.com/issues/39150
1959
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
1960
* https://tracker.ceph.com/issues/45434
1961
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1962
* https://tracker.ceph.com/issues/51282
1963
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1964
* https://tracker.ceph.com/issues/48771
1965
    qa: iogen: workload fails to cause balancing
1966
* https://tracker.ceph.com/issues/51279
1967
    kclient hangs on umount (testing branch)
1968
* https://tracker.ceph.com/issues/50250
1969
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1970 16 Patrick Donnelly
1971
1972
h3. 2021 July 01
1973
1974
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
1975
1976
* https://tracker.ceph.com/issues/51197
1977
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
1978
* https://tracker.ceph.com/issues/50866
1979
    osd: stat mismatch on objects
1980
* https://tracker.ceph.com/issues/48773
1981
    qa: scrub does not complete
1982 15 Patrick Donnelly
1983
1984
h3. 2021 June 26
1985
1986
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
1987
1988
* https://tracker.ceph.com/issues/51183
1989
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1990
* https://tracker.ceph.com/issues/51410
1991
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
1992
* https://tracker.ceph.com/issues/48773
1993
    qa: scrub does not complete
1994
* https://tracker.ceph.com/issues/51282
1995
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1996
* https://tracker.ceph.com/issues/51169
1997
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1998
* https://tracker.ceph.com/issues/48772
1999
    qa: pjd: not ok 9, 44, 80
2000 14 Patrick Donnelly
2001
2002
h3. 2021 June 21
2003
2004
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
2005
2006
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
2007
2008
* https://tracker.ceph.com/issues/51282
2009
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2010
* https://tracker.ceph.com/issues/51183
2011
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2012
* https://tracker.ceph.com/issues/48773
2013
    qa: scrub does not complete
2014
* https://tracker.ceph.com/issues/48771
2015
    qa: iogen: workload fails to cause balancing
2016
* https://tracker.ceph.com/issues/51169
2017
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2018
* https://tracker.ceph.com/issues/50495
2019
    libcephfs: shutdown race fails with status 141
2020
* https://tracker.ceph.com/issues/45434
2021
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2022
* https://tracker.ceph.com/issues/50824
2023
    qa: snaptest-git-ceph bus error
2024
* https://tracker.ceph.com/issues/50223
2025
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2026 13 Patrick Donnelly
2027
2028
h3. 2021 June 16
2029
2030
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
2031
2032
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
2033
2034
* https://tracker.ceph.com/issues/45434
2035
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2036
* https://tracker.ceph.com/issues/51169
2037
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2038
* https://tracker.ceph.com/issues/43216
2039
    MDSMonitor: removes MDS coming out of quorum election
2040
* https://tracker.ceph.com/issues/51278
2041
    mds: "FAILED ceph_assert(!segments.empty())"
2042
* https://tracker.ceph.com/issues/51279
2043
    kclient hangs on umount (testing branch)
2044
* https://tracker.ceph.com/issues/51280
2045
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
2046
* https://tracker.ceph.com/issues/51183
2047
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2048
* https://tracker.ceph.com/issues/51281
2049
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
2050
* https://tracker.ceph.com/issues/48773
2051
    qa: scrub does not complete
2052
* https://tracker.ceph.com/issues/51076
2053
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
2054
* https://tracker.ceph.com/issues/51228
2055
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
2056
* https://tracker.ceph.com/issues/51282
2057
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
2058 12 Patrick Donnelly
2059
2060
h3. 2021 June 14

https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/51228
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51229
    qa: test_multi_snap_schedule list difference failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing

h3. 2021 June 13

https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51197
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 June 11

https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/

Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.

* https://tracker.ceph.com/issues/51169
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election
* https://tracker.ceph.com/issues/51182
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/51183
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
* https://tracker.ceph.com/issues/51184
    qa: fs:bugs does not specify distro

h3. 2021 June 03

https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/43216
    MDSMonitor: removes MDS coming out of quorum election

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114

A regression in the testing kernel caused some failures. Ilya fixed those and the rerun looked better. Some odd new noise in the rerun relating to packaging and "No module named 'tasks.ceph'".

* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/49845#note-2 (regression)
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

One class of failures caused by the PR.

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

And one failure caused by PR: https://github.com/ceph/ceph/pull/39969

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing