
Main » History » Version 174

Rishabh Dave, 09/06/2023 03:25 PM

1 79 Venky Shankar
h1. MAIN
2
3 148 Rishabh Dave
h3. NEW ENTRY BELOW
4
5 174 Rishabh Dave
h3. 6 Sep 2023 Run 2
6
7
https://pulpito.ceph.com/rishabh-2023-08-25_01:50:32-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/ 
8
9
* https://tracker.ceph.com/issues/51964
10
  test_cephfs_mirror_restart_sync_on_blocklist failure
11
* https://tracker.ceph.com/issues/59348
12
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
13
* https://tracker.ceph.com/issues/53859
14
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
15
* https://tracker.ceph.com/issues/61892
16
  test_strays.TestStrays.test_snapshot_remove failed
17
* https://tracker.ceph.com/issues/54460
18
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
19
* https://tracker.ceph.com/issues/59346
20
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
21
* https://tracker.ceph.com/issues/59344
22
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
23
* https://tracker.ceph.com/issues/62484
24
  qa: ffsb.sh test failure
25
* https://tracker.ceph.com/issues/62567
26
  Command failed with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
27
  
28
* https://tracker.ceph.com/issues/61399
29
  ior build failure
30
* https://tracker.ceph.com/issues/57676
31
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
32
* https://tracker.ceph.com/issues/55805
33
  error scrub thrashing reached max tries in 900 secs
34
35 172 Rishabh Dave
h3. 6 Sep 2023
36 171 Rishabh Dave
37 173 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-08-10_20:16:46-fs-wip-rishabh-2023Aug1-b4-testing-default-smithi/
38 171 Rishabh Dave
39 1 Patrick Donnelly
* https://tracker.ceph.com/issues/53859
40
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
41 173 Rishabh Dave
* https://tracker.ceph.com/issues/51964
42
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
43 1 Patrick Donnelly
* https://tracker.ceph.com/issues/61892
44 173 Rishabh Dave
  test_snapshot_remove (test_strays.TestStrays) failed
45
* https://tracker.ceph.com/issues/59348
46
  qa: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
47
* https://tracker.ceph.com/issues/54462
48
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
49
* https://tracker.ceph.com/issues/62556
50
  test_acls: xfstests_dev: python2 is missing
51
* https://tracker.ceph.com/issues/62067
52
  ffsb.sh failure "Resource temporarily unavailable"
53
* https://tracker.ceph.com/issues/57656
54
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
55 1 Patrick Donnelly
* https://tracker.ceph.com/issues/59346
56
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
57 171 Rishabh Dave
* https://tracker.ceph.com/issues/59344
58 173 Rishabh Dave
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
59
60 171 Rishabh Dave
* https://tracker.ceph.com/issues/61399
61
  ior build failure
62
* https://tracker.ceph.com/issues/57676
63
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
64
* https://tracker.ceph.com/issues/55805
65
  error scrub thrashing reached max tries in 900 secs
66 173 Rishabh Dave
67
* https://tracker.ceph.com/issues/62567
68
  Command failed on smithi008 with status 1: "sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo -u postgres -- pgbench -s 500 -i'"
69
* https://tracker.ceph.com/issues/62702
70
  workunit test suites/fsstress.sh on smithi066 with status 124
71 170 Rishabh Dave
72
h3. 5 Sep 2023
73
74
https://pulpito.ceph.com/rishabh-2023-08-25_06:38:25-fs-wip-rishabh-2023aug3-b5-testing-default-smithi/
75
orch:cephadm suite run: http://pulpito.front.sepia.ceph.com/rishabh-2023-09-05_12:16:09-orch:cephadm-wip-rishabh-2023aug3-b5-testing-default-smithi/
76
  this run has failures, but according to Adam King these are not relevant and should be ignored
77
78
* https://tracker.ceph.com/issues/61892
79
  test_snapshot_remove (test_strays.TestStrays) failed
80
* https://tracker.ceph.com/issues/59348
81
  test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
82
* https://tracker.ceph.com/issues/54462
83
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
84
* https://tracker.ceph.com/issues/62067
85
  ffsb.sh failure "Resource temporarily unavailable"
86
* https://tracker.ceph.com/issues/57656 
87
  dbench: write failed on handle 10010 (Resource temporarily unavailable)
88
* https://tracker.ceph.com/issues/59346
89
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
90
* https://tracker.ceph.com/issues/59344
91
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
92
* https://tracker.ceph.com/issues/50223
93
  client.xxxx isn't responding to mclientcaps(revoke)
94
* https://tracker.ceph.com/issues/57655
95
  qa: fs:mixed-clients kernel_untar_build failure
96
* https://tracker.ceph.com/issues/62187
97
  iozone.sh: line 5: iozone: command not found
98
 
99
* https://tracker.ceph.com/issues/61399
100
  ior build failure
101
* https://tracker.ceph.com/issues/57676
102
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
103
* https://tracker.ceph.com/issues/55805
104
  error scrub thrashing reached max tries in 900 secs
105 169 Venky Shankar
106
107
h3. 31 Aug 2023
108
109
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230824.045828
110
111
* https://tracker.ceph.com/issues/52624
112
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
113
* https://tracker.ceph.com/issues/62187
114
    iozone: command not found
115
* https://tracker.ceph.com/issues/61399
116
    ior build failure
117
* https://tracker.ceph.com/issues/59531
118
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
119
* https://tracker.ceph.com/issues/61399
120
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
121
* https://tracker.ceph.com/issues/57655
122
    qa: fs:mixed-clients kernel_untar_build failure
123
* https://tracker.ceph.com/issues/59344
124
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
125
* https://tracker.ceph.com/issues/59346
126
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
127
* https://tracker.ceph.com/issues/59348
128
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
129
* https://tracker.ceph.com/issues/59413
130
    cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
131
* https://tracker.ceph.com/issues/62653
132
    qa: unimplemented fcntl command: 1036 with fsstress
133
* https://tracker.ceph.com/issues/61400
134
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
135
* https://tracker.ceph.com/issues/62658
136
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
137
* https://tracker.ceph.com/issues/62188
138
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
139 168 Venky Shankar
140
141
h3. 25 Aug 2023
142
143
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.064807
144
145
* https://tracker.ceph.com/issues/59344
146
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
147
* https://tracker.ceph.com/issues/59346
148
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
149
* https://tracker.ceph.com/issues/59348
150
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
151
* https://tracker.ceph.com/issues/57655
152
    qa: fs:mixed-clients kernel_untar_build failure
153
* https://tracker.ceph.com/issues/61243
154
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
155
* https://tracker.ceph.com/issues/61399
156
    ior build failure
157
* https://tracker.ceph.com/issues/61399
158
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
159
* https://tracker.ceph.com/issues/62484
160
    qa: ffsb.sh test failure
161
* https://tracker.ceph.com/issues/59531
162
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
163
* https://tracker.ceph.com/issues/62510
164
    snaptest-git-ceph.sh failure with fs/thrash
165 167 Venky Shankar
166
167
h3. 24 Aug 2023
168
169
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230822.060131
170
171
* https://tracker.ceph.com/issues/57676
172
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
173
* https://tracker.ceph.com/issues/51964
174
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
175
* https://tracker.ceph.com/issues/59344
176
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
177
* https://tracker.ceph.com/issues/59346
178
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
179
* https://tracker.ceph.com/issues/59348
180
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
181
* https://tracker.ceph.com/issues/61399
182
    ior build failure
183
* https://tracker.ceph.com/issues/61399
184
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
185
* https://tracker.ceph.com/issues/62510
186
    snaptest-git-ceph.sh failure with fs/thrash
187
* https://tracker.ceph.com/issues/62484
188
    qa: ffsb.sh test failure
189
* https://tracker.ceph.com/issues/57087
190
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
191
* https://tracker.ceph.com/issues/57656
192
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
193
* https://tracker.ceph.com/issues/62187
194
    iozone: command not found
195
* https://tracker.ceph.com/issues/62188
196
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
197
* https://tracker.ceph.com/issues/62567
198
    postgres workunit times out - MDS_SLOW_REQUEST in logs
199 166 Venky Shankar
200
201
h3. 22 Aug 2023
202
203
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230809.035933
204
205
* https://tracker.ceph.com/issues/57676
206
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
207
* https://tracker.ceph.com/issues/51964
208
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
209
* https://tracker.ceph.com/issues/59344
210
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
211
* https://tracker.ceph.com/issues/59346
212
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
213
* https://tracker.ceph.com/issues/59348
214
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
215
* https://tracker.ceph.com/issues/61399
216
    ior build failure
217
* https://tracker.ceph.com/issues/61399
218
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
219
* https://tracker.ceph.com/issues/57655
220
    qa: fs:mixed-clients kernel_untar_build failure
221
* https://tracker.ceph.com/issues/61243
222
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
223
* https://tracker.ceph.com/issues/62188
224
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
225
* https://tracker.ceph.com/issues/62510
226
    snaptest-git-ceph.sh failure with fs/thrash
227
* https://tracker.ceph.com/issues/62511
228
    src/mds/MDLog.cc: 299: FAILED ceph_assert(!mds_is_shutting_down)
229 165 Venky Shankar
230
231
h3. 14 Aug 2023
232
233
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230808.093601
234
235
* https://tracker.ceph.com/issues/51964
236
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
237
* https://tracker.ceph.com/issues/61400
238
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
239
* https://tracker.ceph.com/issues/61399
240
    ior build failure
241
* https://tracker.ceph.com/issues/59348
242
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
243
* https://tracker.ceph.com/issues/59531
244
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
245
* https://tracker.ceph.com/issues/59344
246
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
247
* https://tracker.ceph.com/issues/59346
248
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
249
* https://tracker.ceph.com/issues/61399
250
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
251
* https://tracker.ceph.com/issues/59684 [kclient bug]
252
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
253
* https://tracker.ceph.com/issues/61243 (NEW)
254
    test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
255
* https://tracker.ceph.com/issues/57655
256
    qa: fs:mixed-clients kernel_untar_build failure
257
* https://tracker.ceph.com/issues/57656
258
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
259 163 Venky Shankar
260
261
h3. 28 JULY 2023
262
263
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230725.053049
264
265
* https://tracker.ceph.com/issues/51964
266
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
267
* https://tracker.ceph.com/issues/61400
268
    valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
269
* https://tracker.ceph.com/issues/61399
270
    ior build failure
271
* https://tracker.ceph.com/issues/57676
272
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
273
* https://tracker.ceph.com/issues/59348
274
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
275
* https://tracker.ceph.com/issues/59531
276
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
277
* https://tracker.ceph.com/issues/59344
278
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
279
* https://tracker.ceph.com/issues/59346
280
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
281
* https://github.com/ceph/ceph/pull/52556
282
    task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph_cmd' (see note #4)
283
* https://tracker.ceph.com/issues/62187
284
    iozone: command not found
285
* https://tracker.ceph.com/issues/61399
286
    qa: build failure for ior (the failed instance is when compiling `mdtest`)
287
* https://tracker.ceph.com/issues/62188
288 164 Rishabh Dave
    AttributeError: 'RemoteProcess' object has no attribute 'read' (happens only with multis-auth test)
289 158 Rishabh Dave
290
h3. 24 Jul 2023
291
292
https://pulpito.ceph.com/rishabh-2023-07-13_21:35:13-fs-wip-rishabh-2023Jul13-testing-default-smithi/
293
https://pulpito.ceph.com/rishabh-2023-07-14_10:26:42-fs-wip-rishabh-2023Jul13-testing-default-smithi/
294
There were a few failures from one of the PRs under testing. The following run confirms that removing this PR fixes these failures -
295
https://pulpito.ceph.com/rishabh-2023-07-18_02:11:50-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
296
One more run to check whether blogbench.sh fails every time:
297
https://pulpito.ceph.com/rishabh-2023-07-21_17:58:19-fs-wip-rishabh-2023Jul13-m-quota-testing-default-smithi/
298
blogbench.sh failures were seen in the above runs for the first time; the following run with the main branch confirms that "blogbench.sh" was not related to any of the PRs under testing -
299 161 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-07-21_21:30:53-fs-wip-rishabh-2023Jul13-base-2-testing-default-smithi/
300
301
* https://tracker.ceph.com/issues/61892
302
  test_snapshot_remove (test_strays.TestStrays) failed
303
* https://tracker.ceph.com/issues/53859
304
  test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
305
* https://tracker.ceph.com/issues/61982
306
  test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
307
* https://tracker.ceph.com/issues/52438
308
  qa: ffsb timeout
309
* https://tracker.ceph.com/issues/54460
310
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
311
* https://tracker.ceph.com/issues/57655
312
  qa: fs:mixed-clients kernel_untar_build failure
313
* https://tracker.ceph.com/issues/48773
314
  reached max tries: scrub does not complete
315
* https://tracker.ceph.com/issues/58340
316
  mds: fsstress.sh hangs with multimds
317
* https://tracker.ceph.com/issues/61400
318
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
319
* https://tracker.ceph.com/issues/57206
320
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
321
  
322
* https://tracker.ceph.com/issues/57656
323
  [testing] dbench: write failed on handle 10010 (Resource temporarily unavailable)
324
* https://tracker.ceph.com/issues/61399
325
  ior build failure
326
* https://tracker.ceph.com/issues/57676
327
  error during scrub thrashing: backtrace
328
  
329
* https://tracker.ceph.com/issues/38452
330
  'sudo -u postgres -- pgbench -s 500 -i' failed
331 158 Rishabh Dave
* https://tracker.ceph.com/issues/62126
332 157 Venky Shankar
  blogbench.sh failure
333
334
h3. 18 July 2023
335
336
* https://tracker.ceph.com/issues/52624
337
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
338
* https://tracker.ceph.com/issues/57676
339
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
340
* https://tracker.ceph.com/issues/54460
341
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
342
* https://tracker.ceph.com/issues/57655
343
    qa: fs:mixed-clients kernel_untar_build failure
344
* https://tracker.ceph.com/issues/51964
345
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
346
* https://tracker.ceph.com/issues/59344
347
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
348
* https://tracker.ceph.com/issues/61182
349
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
350
* https://tracker.ceph.com/issues/61957
351
    test_client_limits.TestClientLimits.test_client_release_bug
352
* https://tracker.ceph.com/issues/59348
353
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
354
* https://tracker.ceph.com/issues/61892
355
    test_strays.TestStrays.test_snapshot_remove failed
356
* https://tracker.ceph.com/issues/59346
357
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
358
* https://tracker.ceph.com/issues/44565
359
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
360
* https://tracker.ceph.com/issues/62067
361
    ffsb.sh failure "Resource temporarily unavailable"
362 156 Venky Shankar
363
364
h3. 17 July 2023
365
366
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230704.040136
367
368
* https://tracker.ceph.com/issues/61982
369
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
370
* https://tracker.ceph.com/issues/59344
371
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
372
* https://tracker.ceph.com/issues/61182
373
    cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
374
* https://tracker.ceph.com/issues/61957
375
    test_client_limits.TestClientLimits.test_client_release_bug
376
* https://tracker.ceph.com/issues/61400
377
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
378
* https://tracker.ceph.com/issues/59348
379
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
380
* https://tracker.ceph.com/issues/61892
381
    test_strays.TestStrays.test_snapshot_remove failed
382
* https://tracker.ceph.com/issues/59346
383
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
384
* https://tracker.ceph.com/issues/62036
385
    src/mds/MDCache.cc: 5131: FAILED ceph_assert(isolated_inodes.empty())
386
* https://tracker.ceph.com/issues/61737
387
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
388
* https://tracker.ceph.com/issues/44565
389
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
390 155 Rishabh Dave
391 1 Patrick Donnelly
392 153 Rishabh Dave
h3. 13 July 2023 Run 2
393 152 Rishabh Dave
394
395
https://pulpito.ceph.com/rishabh-2023-07-08_23:33:40-fs-wip-rishabh-2023Jul9-testing-default-smithi/
396
https://pulpito.ceph.com/rishabh-2023-07-09_20:19:09-fs-wip-rishabh-2023Jul9-testing-default-smithi/
397
398
* https://tracker.ceph.com/issues/61957
399
  test_client_limits.TestClientLimits.test_client_release_bug
400
* https://tracker.ceph.com/issues/61982
401
  Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
402
* https://tracker.ceph.com/issues/59348
403
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
404
* https://tracker.ceph.com/issues/59344
405
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
406
* https://tracker.ceph.com/issues/54460
407
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
408
* https://tracker.ceph.com/issues/57655
409
  qa: fs:mixed-clients kernel_untar_build failure
410
* https://tracker.ceph.com/issues/61400
411
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
412
* https://tracker.ceph.com/issues/61399
413
  ior build failure
414
415 151 Venky Shankar
h3. 13 July 2023
416
417
https://pulpito.ceph.com/vshankar-2023-07-04_11:45:30-fs-wip-vshankar-testing-20230704.040242-testing-default-smithi/
418
419
* https://tracker.ceph.com/issues/54460
420
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
421
* https://tracker.ceph.com/issues/61400
422
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
423
* https://tracker.ceph.com/issues/57655
424
    qa: fs:mixed-clients kernel_untar_build failure
425
* https://tracker.ceph.com/issues/61945
426
    LibCephFS.DelegTimeout failure
427
* https://tracker.ceph.com/issues/52624
428
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
429
* https://tracker.ceph.com/issues/57676
430
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
431
* https://tracker.ceph.com/issues/59348
432
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
433
* https://tracker.ceph.com/issues/59344
434
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
435
* https://tracker.ceph.com/issues/51964
436
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
437
* https://tracker.ceph.com/issues/59346
438
    fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
439
* https://tracker.ceph.com/issues/61982
440
    Test failure: test_clean_stale_subvolume_snapshot_metadata (tasks.cephfs.test_volumes.TestSubvolumeSnapshots)
441 150 Rishabh Dave
442
443
h3. 13 Jul 2023
444
445
https://pulpito.ceph.com/rishabh-2023-07-05_22:21:20-fs-wip-rishabh-2023Jul5-testing-default-smithi/
446
https://pulpito.ceph.com/rishabh-2023-07-06_19:33:28-fs-wip-rishabh-2023Jul5-testing-default-smithi/
447
448
* https://tracker.ceph.com/issues/61957
449
  test_client_limits.TestClientLimits.test_client_release_bug
450
* https://tracker.ceph.com/issues/59348
451
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
452
* https://tracker.ceph.com/issues/59346
453
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write" 
454
* https://tracker.ceph.com/issues/48773
455
  scrub does not complete: reached max tries
456
* https://tracker.ceph.com/issues/59344
457
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument" 
458
* https://tracker.ceph.com/issues/52438
459
  qa: ffsb timeout
460
* https://tracker.ceph.com/issues/57656
461
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
462
* https://tracker.ceph.com/issues/58742
463
  xfstests-dev: kcephfs: generic
464
* https://tracker.ceph.com/issues/61399
465 148 Rishabh Dave
  libmpich: undefined references to fi_strerror
466 149 Rishabh Dave
467 148 Rishabh Dave
h3. 12 July 2023
468
469
https://pulpito.ceph.com/rishabh-2023-07-05_18:32:52-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
470
https://pulpito.ceph.com/rishabh-2023-07-06_19:46:43-fs-wip-rishabh-CephManager-in-CephFSTestCase-testing-default-smithi/
471
472
* https://tracker.ceph.com/issues/61892
473
  test_strays.TestStrays.test_snapshot_remove failed
474
* https://tracker.ceph.com/issues/59348
475
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
476
* https://tracker.ceph.com/issues/53859
477
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
478
* https://tracker.ceph.com/issues/59346
479
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
480
* https://tracker.ceph.com/issues/58742
481
  xfstests-dev: kcephfs: generic
482
* https://tracker.ceph.com/issues/59344
483
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
484
* https://tracker.ceph.com/issues/52438
485
  qa: ffsb timeout
486
* https://tracker.ceph.com/issues/57656
487
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
488
* https://tracker.ceph.com/issues/54460
489
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
490
* https://tracker.ceph.com/issues/57655
491
  qa: fs:mixed-clients kernel_untar_build failure
492
* https://tracker.ceph.com/issues/61182
493
  cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds
494
* https://tracker.ceph.com/issues/61400
495
  valgrind+ceph-mon issues: sudo ceph --cluster ceph osd crush tunables default
496 147 Rishabh Dave
* https://tracker.ceph.com/issues/48773
497 146 Patrick Donnelly
  reached max tries: scrub does not complete
498
499
h3. 05 July 2023
500
501
https://pulpito.ceph.com/pdonnell-2023-07-05_03:38:33-fs:libcephfs-wip-pdonnell-testing-20230705.003205-distro-default-smithi/
502
503 137 Rishabh Dave
* https://tracker.ceph.com/issues/59346
504 143 Rishabh Dave
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
505
506
h3. 27 Jun 2023
507
508
https://pulpito.ceph.com/rishabh-2023-06-21_23:38:17-fs-wip-rishabh-improvements-authmon-testing-default-smithi/
509 144 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-06-23_17:37:30-fs-wip-rishabh-improvements-authmon-distro-default-smithi/
510
511
* https://tracker.ceph.com/issues/59348
512
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
513
* https://tracker.ceph.com/issues/54460
514
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
515
* https://tracker.ceph.com/issues/59346
516
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
517
* https://tracker.ceph.com/issues/59344
518
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
519
* https://tracker.ceph.com/issues/61399
520
  libmpich: undefined references to fi_strerror
521
* https://tracker.ceph.com/issues/50223
522
  client.xxxx isn't responding to mclientcaps(revoke)
523 143 Rishabh Dave
* https://tracker.ceph.com/issues/61831
524
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
525 142 Venky Shankar
526
527
h3. 22 June 2023
528
529
* https://tracker.ceph.com/issues/57676
530
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
531
* https://tracker.ceph.com/issues/54460
532
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
533
* https://tracker.ceph.com/issues/59344
534
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
535
* https://tracker.ceph.com/issues/59348
536
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
537
* https://tracker.ceph.com/issues/61400
538
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
539
* https://tracker.ceph.com/issues/57655
540
    qa: fs:mixed-clients kernel_untar_build failure
541
* https://tracker.ceph.com/issues/61394
542
    qa/quincy: cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
543
* https://tracker.ceph.com/issues/61762
544
    qa: wait_for_clean: failed before timeout expired
545
* https://tracker.ceph.com/issues/61775
546
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
547
* https://tracker.ceph.com/issues/44565
548
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
549
* https://tracker.ceph.com/issues/61790
550
    cephfs client to mds comms remain silent after reconnect
551
* https://tracker.ceph.com/issues/61791
552
    snaptest-git-ceph.sh test timed out (job dead)
553 139 Venky Shankar
554
555
h3. 20 June 2023
556
557
https://pulpito.ceph.com/vshankar-2023-06-15_04:58:28-fs-wip-vshankar-testing-20230614.124123-testing-default-smithi/
558
559
* https://tracker.ceph.com/issues/57676
560
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
561
* https://tracker.ceph.com/issues/54460
562
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
563 140 Venky Shankar
* https://tracker.ceph.com/issues/54462
564 1 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
565 141 Venky Shankar
* https://tracker.ceph.com/issues/58340
566 139 Venky Shankar
  mds: fsstress.sh hangs with multimds
567
* https://tracker.ceph.com/issues/59344
568
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
569
* https://tracker.ceph.com/issues/59348
570
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
571
* https://tracker.ceph.com/issues/57656
572
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
573
* https://tracker.ceph.com/issues/61400
574
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
575
* https://tracker.ceph.com/issues/57655
576
    qa: fs:mixed-clients kernel_untar_build failure
577
* https://tracker.ceph.com/issues/44565
578
    src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XLOCKDONE || state == LOCK_XLOCKSNAP || state == LOCK_LOCK_XLOCK || state == LOCK_LOCK || is_locallock())
579
* https://tracker.ceph.com/issues/61737
580 138 Rishabh Dave
    coredump from '/bin/podman pull quay.ceph.io/ceph-ci/ceph:pacific'
581
582
h3. 16 June 2023
583
584 1 Patrick Donnelly
https://pulpito.ceph.com/rishabh-2023-05-16_10:39:13-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
585 145 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-17_11:09:48-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
586 138 Rishabh Dave
https://pulpito.ceph.com/rishabh-2023-05-18_10:01:53-fs-wip-rishabh-2023May15-1524-testing-default-smithi/
587 1 Patrick Donnelly
(binaries were rebuilt with a subset of the original PRs) https://pulpito.ceph.com/rishabh-2023-06-09_10:19:22-fs-wip-rishabh-2023Jun9-1308-testing-default-smithi/
588
589
590
* https://tracker.ceph.com/issues/59344
591
  qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
592 138 Rishabh Dave
* https://tracker.ceph.com/issues/59348
593
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
594 145 Rishabh Dave
* https://tracker.ceph.com/issues/59346
595
  fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
596
* https://tracker.ceph.com/issues/57656
597
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
598
* https://tracker.ceph.com/issues/54460
599
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
600 138 Rishabh Dave
* https://tracker.ceph.com/issues/54462
601
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
602 145 Rishabh Dave
* https://tracker.ceph.com/issues/61399
603
  libmpich: undefined references to fi_strerror
604
* https://tracker.ceph.com/issues/58945
605
  xfstests-dev: ceph-fuse: generic 
606 138 Rishabh Dave
* https://tracker.ceph.com/issues/58742
607 136 Patrick Donnelly
  xfstests-dev: kcephfs: generic
608
609
h3. 24 May 2023
610
611
https://pulpito.ceph.com/pdonnell-2023-05-23_18:20:18-fs-wip-pdonnell-testing-20230523.134409-distro-default-smithi/
612
613
* https://tracker.ceph.com/issues/57676
614
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
615
* https://tracker.ceph.com/issues/59683
616
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
617
* https://tracker.ceph.com/issues/61399
618
    qa: "[Makefile:299: ior] Error 1"
619
* https://tracker.ceph.com/issues/61265
620
    qa: tasks.cephfs.fuse_mount:process failed to terminate after unmount
621
* https://tracker.ceph.com/issues/59348
622
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
623
* https://tracker.ceph.com/issues/59346
624
    qa/workunits/fs/test_python.sh failed with "AssertionError: DiskQuotaExceeded not raised by write"
625
* https://tracker.ceph.com/issues/61400
626
    valgrind+ceph-mon: segmentation fault in rocksdb+tcmalloc
627
* https://tracker.ceph.com/issues/54460
628
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
629
* https://tracker.ceph.com/issues/51964
630
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
631
* https://tracker.ceph.com/issues/59344
632
    qa: workunit test fs/quota/quota.sh failed with "setfattr: .: Invalid argument"
633
* https://tracker.ceph.com/issues/61407
634
    mds: abort on CInode::verify_dirfrags
635
* https://tracker.ceph.com/issues/48773
636
    qa: scrub does not complete
637
* https://tracker.ceph.com/issues/57655
638
    qa: fs:mixed-clients kernel_untar_build failure
639
* https://tracker.ceph.com/issues/61409
640 128 Venky Shankar
    qa: _test_stale_caps does not wait for file flush before stat
641
642
h3. 15 May 2023
643 130 Venky Shankar
644 128 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020
645
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.090020-6
646
647
* https://tracker.ceph.com/issues/52624
648
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
649
* https://tracker.ceph.com/issues/54460
650
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
651
* https://tracker.ceph.com/issues/57676
652
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
653
* https://tracker.ceph.com/issues/59684 [kclient bug]
654
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
655
* https://tracker.ceph.com/issues/59348
656
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
657 131 Venky Shankar
* https://tracker.ceph.com/issues/61148
658
    dbench test results in call trace in dmesg [kclient bug]
659 133 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58340
660 134 Kotresh Hiremath Ravishankar
    mds: fsstress.sh hangs with multimds
661 125 Venky Shankar
662
 
663 129 Rishabh Dave
h3. 11 May 2023
664
665
https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/
666
667
* https://tracker.ceph.com/issues/59684 [kclient bug]
668
  Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
669
* https://tracker.ceph.com/issues/59348
670
  qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
671
* https://tracker.ceph.com/issues/57655
672
  qa: fs:mixed-clients kernel_untar_build failure
673
* https://tracker.ceph.com/issues/57676
674
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
675
* https://tracker.ceph.com/issues/55805
676
  error during scrub thrashing reached max tries in 900 secs
677
* https://tracker.ceph.com/issues/54460
678
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
679
* https://tracker.ceph.com/issues/57656
680
  [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
681
* https://tracker.ceph.com/issues/58220
682
  Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
683 1 Patrick Donnelly
* https://tracker.ceph.com/issues/58220#note-9
684
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
685 134 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/59342
686
  qa/workunits/kernel_untar_build.sh failed when compiling the Linux source
687 135 Kotresh Hiremath Ravishankar
* https://tracker.ceph.com/issues/58949
688
    test_cephfs.test_disk_quota_exceeeded_error - AssertionError: DiskQuotaExceeded not raised by write
689 129 Rishabh Dave
* https://tracker.ceph.com/issues/61243 (NEW)
690
  test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev) failed
691
692 125 Venky Shankar
h3. 11 May 2023
693 127 Venky Shankar
694
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230509.054005
695 126 Venky Shankar
696 125 Venky Shankar
(no fsstress job failure [https://tracker.ceph.com/issues/58340] since https://github.com/ceph/ceph/pull/49553 was included in the branch; however, the PR got updated and needs a retest).
698
699
* https://tracker.ceph.com/issues/52624
700
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
701
* https://tracker.ceph.com/issues/54460
702
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
703
* https://tracker.ceph.com/issues/57676
704
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
705
* https://tracker.ceph.com/issues/59683
706
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
707
* https://tracker.ceph.com/issues/59684 [kclient bug]
708
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
709
* https://tracker.ceph.com/issues/59348
710 124 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
711
712
h3. 09 May 2023
713
714
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230506.143554
715
716
* https://tracker.ceph.com/issues/52624
717
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
718
* https://tracker.ceph.com/issues/58340
719
    mds: fsstress.sh hangs with multimds
720
* https://tracker.ceph.com/issues/54460
721
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
722
* https://tracker.ceph.com/issues/57676
723
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
724
* https://tracker.ceph.com/issues/51964
725
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
726
* https://tracker.ceph.com/issues/59350
727
    qa: test_flag_scrub_mdsdir (tasks.cephfs.test_scrub_checks.TestScrubChecks) ... ERROR
728
* https://tracker.ceph.com/issues/59683
729
    Error: Unable to find a match: userspace-rcu-devel libedit-devel device-mapper-devel with fscrypt tests
730
* https://tracker.ceph.com/issues/59684 [kclient bug]
731
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
732
* https://tracker.ceph.com/issues/59348
733 123 Venky Shankar
    qa: ERROR: test_disable_enable_human_readable_quota_values (tasks.cephfs.test_quota.TestQuota)
734
735
h3. 10 Apr 2023
736
737
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230330.105356
738
739
* https://tracker.ceph.com/issues/52624
740
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
741
* https://tracker.ceph.com/issues/58340
742
    mds: fsstress.sh hangs with multimds
743
* https://tracker.ceph.com/issues/54460
744
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
745
* https://tracker.ceph.com/issues/57676
746
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
747 119 Rishabh Dave
* https://tracker.ceph.com/issues/51964
748 120 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
749 121 Rishabh Dave
750 120 Rishabh Dave
h3. 31 Mar 2023
751 122 Rishabh Dave
752
run: http://pulpito.front.sepia.ceph.com/rishabh-2023-03-03_21:39:49-fs-wip-rishabh-2023Mar03-2316-testing-default-smithi/
753 120 Rishabh Dave
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-11_05:54:03-fs-wip-rishabh-2023Mar10-1727-testing-default-smithi/
754
re-run (some PRs removed from batch): http://pulpito.front.sepia.ceph.com/rishabh-2023-03-23_08:27:28-fs-wip-rishabh-2023Mar20-2250-testing-default-smithi/
755
756
There were many more re-runs for "failed+dead" jobs as well as for individual jobs. Half of the PRs from the batch were removed (gradually over subsequent re-runs).
757
758
* https://tracker.ceph.com/issues/57676
759
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
760
* https://tracker.ceph.com/issues/54460
761
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
762
* https://tracker.ceph.com/issues/58220
763
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
764
* https://tracker.ceph.com/issues/58220#note-9
765
  workunit fs/test_python.sh: test_disk_quota_exceeeded_error failure
766
* https://tracker.ceph.com/issues/56695
767
  Command failed (workunit test suites/pjd.sh)
768
* https://tracker.ceph.com/issues/58564 
769
  workunit dbench failed with error code 1
770
* https://tracker.ceph.com/issues/57206
771
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
772
* https://tracker.ceph.com/issues/57580
773
  Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
774
* https://tracker.ceph.com/issues/58940
775
  ceph osd hit ceph_abort
776
* https://tracker.ceph.com/issues/55805
777 118 Venky Shankar
  error scrub thrashing reached max tries in 900 secs
778
779
h3. 30 March 2023
780
781
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230315.085747
782
783
* https://tracker.ceph.com/issues/58938
784
    qa: xfstests-dev's generic test suite has 7 failures with kclient
785
* https://tracker.ceph.com/issues/51964
786
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
787
* https://tracker.ceph.com/issues/58340
788 114 Venky Shankar
    mds: fsstress.sh hangs with multimds
789
790 115 Venky Shankar
h3. 29 March 2023
791 114 Venky Shankar
792
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20230317.095222
793
794
* https://tracker.ceph.com/issues/56695
795
    [RHEL stock] pjd test failures
796
* https://tracker.ceph.com/issues/57676
797
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
798
* https://tracker.ceph.com/issues/57087
799
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
800 116 Venky Shankar
* https://tracker.ceph.com/issues/58340
801
    mds: fsstress.sh hangs with multimds
802 114 Venky Shankar
* https://tracker.ceph.com/issues/57655
803
    qa: fs:mixed-clients kernel_untar_build failure
804 117 Venky Shankar
* https://tracker.ceph.com/issues/59230
805
    Test failure: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
806 114 Venky Shankar
* https://tracker.ceph.com/issues/58938
807 113 Venky Shankar
    qa: xfstests-dev's generic test suite has 7 failures with kclient
808
809
h3. 13 Mar 2023
810
811
* https://tracker.ceph.com/issues/56695
812
    [RHEL stock] pjd test failures
813
* https://tracker.ceph.com/issues/57676
814
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
815
* https://tracker.ceph.com/issues/51964
816
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
817
* https://tracker.ceph.com/issues/54460
818
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
819
* https://tracker.ceph.com/issues/57656
820 112 Venky Shankar
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
821
822
h3. 09 Mar 2023
823
824
https://pulpito.ceph.com/vshankar-2023-03-03_04:39:14-fs-wip-vshankar-testing-20230303.023823-testing-default-smithi/
825
https://pulpito.ceph.com/vshankar-2023-03-08_15:12:36-fs-wip-vshankar-testing-20230308.112059-testing-default-smithi/
826
827
* https://tracker.ceph.com/issues/56695
828
    [RHEL stock] pjd test failures
829
* https://tracker.ceph.com/issues/57676
830
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
831
* https://tracker.ceph.com/issues/51964
832
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
833
* https://tracker.ceph.com/issues/54460
834
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
835
* https://tracker.ceph.com/issues/58340
836
    mds: fsstress.sh hangs with multimds
837
* https://tracker.ceph.com/issues/57087
838 111 Venky Shankar
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
839
840
h3. 07 Mar 2023
841
842
https://pulpito.ceph.com/vshankar-2023-03-02_09:21:58-fs-wip-vshankar-testing-20230222.044949-testing-default-smithi/
843
https://pulpito.ceph.com/vshankar-2023-03-07_05:15:12-fs-wip-vshankar-testing-20230307.030510-testing-default-smithi/
844
845
* https://tracker.ceph.com/issues/56695
846
    [RHEL stock] pjd test failures
847
* https://tracker.ceph.com/issues/57676
848
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
849
* https://tracker.ceph.com/issues/51964
850
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
851
* https://tracker.ceph.com/issues/57656
852
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
853
* https://tracker.ceph.com/issues/57655
854
    qa: fs:mixed-clients kernel_untar_build failure
855
* https://tracker.ceph.com/issues/58220
856
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
857
* https://tracker.ceph.com/issues/54460
858
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
859
* https://tracker.ceph.com/issues/58934
860 109 Venky Shankar
    snaptest-git-ceph.sh failure with ceph-fuse
861
862
h3. 28 Feb 2023
863
864
https://pulpito.ceph.com/vshankar-2023-02-24_02:11:45-fs-wip-vshankar-testing-20230222.025426-testing-default-smithi/
865
866
* https://tracker.ceph.com/issues/56695
867
    [RHEL stock] pjd test failures
868
* https://tracker.ceph.com/issues/57676
869
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
870 110 Venky Shankar
* https://tracker.ceph.com/issues/56446
871 109 Venky Shankar
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
872
873 107 Venky Shankar
(teuthology infra issues causing testing delays - merging PRs which have tests passing)
874
875
h3. 25 Jan 2023
876
877
https://pulpito.ceph.com/vshankar-2023-01-25_07:57:32-fs-wip-vshankar-testing-20230125.055346-testing-default-smithi/
878
879
* https://tracker.ceph.com/issues/52624
880
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
881
* https://tracker.ceph.com/issues/56695
882
    [RHEL stock] pjd test failures
883
* https://tracker.ceph.com/issues/57676
884
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
885
* https://tracker.ceph.com/issues/56446
886
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
887
* https://tracker.ceph.com/issues/57206
888
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
889
* https://tracker.ceph.com/issues/58220
890
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
891
* https://tracker.ceph.com/issues/58340
892
  mds: fsstress.sh hangs with multimds
893
* https://tracker.ceph.com/issues/56011
894
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
895
* https://tracker.ceph.com/issues/54460
896 101 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
897
898
h3. 30 JAN 2023
899
900
run: http://pulpito.front.sepia.ceph.com/rishabh-2022-11-28_08:04:11-fs-wip-rishabh-testing-2022Nov24-1818-testing-default-smithi/
901
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-13_12:08:33-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
902 105 Rishabh Dave
re-run of re-run: http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/
903
904 101 Rishabh Dave
* https://tracker.ceph.com/issues/52624
905
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" 
906
* https://tracker.ceph.com/issues/56695
907
  [RHEL stock] pjd test failures
908
* https://tracker.ceph.com/issues/57676
909
  qa: error during scrub thrashing: rank damage found: {'backtrace'}
910
* https://tracker.ceph.com/issues/55332
911
  Failure in snaptest-git-ceph.sh
912
* https://tracker.ceph.com/issues/51964
913
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
914
* https://tracker.ceph.com/issues/56446
915
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
916
* https://tracker.ceph.com/issues/57655 
917
  qa: fs:mixed-clients kernel_untar_build failure
918
* https://tracker.ceph.com/issues/54460
919
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
920 103 Rishabh Dave
* https://tracker.ceph.com/issues/58340
921
  mds: fsstress.sh hangs with multimds
922 101 Rishabh Dave
* https://tracker.ceph.com/issues/58219
923 102 Rishabh Dave
  Command crashed: 'ceph-dencoder type inode_backtrace_t import - decode dump_json'
924
925
* "Failed to load ceph-mgr modules: prometheus" in cluster log"
926 106 Rishabh Dave
  http://pulpito.front.sepia.ceph.com/rishabh-2023-01-23_18:53:32-fs-wip-rishabh-testing-2022Nov24-11Jan2023-distro-default-smithi/7134086
927
  According to Venky, this was fixed in https://github.com/ceph/ceph/commit/cf6089200d96fc56b08ee17a4e31f19823370dc8
928 102 Rishabh Dave
* Created https://tracker.ceph.com/issues/58564
929 100 Venky Shankar
  workunit test suites/dbench.sh failed with error code 1
930
931
h3. 15 Dec 2022
932
933
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221215.112736
934
935
* https://tracker.ceph.com/issues/52624
936
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
937
* https://tracker.ceph.com/issues/56695
938
    [RHEL stock] pjd test failures
939
* https://tracker.ceph.com/issues/58219
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
943
* https://tracker.ceph.com/issues/57676
944
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
945
* https://tracker.ceph.com/issues/58340
946 96 Venky Shankar
    mds: fsstress.sh hangs with multimds
947
948
h3. 08 Dec 2022
949 99 Venky Shankar
950 96 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221130.043104
951
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20221209.043803
952
953
(lots of transient git.ceph.com failures)
954
955
* https://tracker.ceph.com/issues/52624
956
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
957
* https://tracker.ceph.com/issues/56695
958
    [RHEL stock] pjd test failures
959
* https://tracker.ceph.com/issues/57655
960
    qa: fs:mixed-clients kernel_untar_build failure
961
* https://tracker.ceph.com/issues/58219
962
    Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration)
963
* https://tracker.ceph.com/issues/58220
964
    Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
965 97 Venky Shankar
* https://tracker.ceph.com/issues/57676
966
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
967 98 Venky Shankar
* https://tracker.ceph.com/issues/53859
968
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
969
* https://tracker.ceph.com/issues/54460
970
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
971 96 Venky Shankar
* https://tracker.ceph.com/issues/58244
972 95 Venky Shankar
    Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
973
974
h3. 14 Oct 2022
975
976
https://pulpito.ceph.com/vshankar-2022-10-12_04:56:59-fs-wip-vshankar-testing-20221011-145847-testing-default-smithi/
977
https://pulpito.ceph.com/vshankar-2022-10-14_04:04:57-fs-wip-vshankar-testing-20221014-072608-testing-default-smithi/
978
979
* https://tracker.ceph.com/issues/52624
980
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
981
* https://tracker.ceph.com/issues/55804
982
    Command failed (workunit test suites/pjd.sh)
983
* https://tracker.ceph.com/issues/51964
984
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
985
* https://tracker.ceph.com/issues/57682
986
    client: ERROR: test_reconnect_after_blocklisted
987 90 Rishabh Dave
* https://tracker.ceph.com/issues/54460
988 91 Rishabh Dave
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
989
990
h3. 10 Oct 2022
991 92 Rishabh Dave
992 91 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-30_19:45:21-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
993
994
reruns
995
* fs-thrash, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-04_13:19:47-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
996 94 Rishabh Dave
* fs-verify, passed: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-05_12:25:37-fs-wip-rishabh-testing-30Sep2022-testing-default-smithi/
997 91 Rishabh Dave
* cephadm failures also passed after many re-runs: http://pulpito.front.sepia.ceph.com/rishabh-2022-10-06_13:50:51-fs-wip-rishabh-testing-30Sep2022-2-testing-default-smithi/
998 93 Rishabh Dave
    ** needed this PR to be merged into the ceph-ci branch - https://github.com/ceph/ceph/pull/47458
999 91 Rishabh Dave
1000
known bugs
1001
* https://tracker.ceph.com/issues/52624
1002
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1003
* https://tracker.ceph.com/issues/50223
1004
  client.xxxx isn't responding to mclientcaps(revoke)
1005
* https://tracker.ceph.com/issues/57299
1006
  qa: test_dump_loads fails with JSONDecodeError
1007
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1008
  qa: fs:mixed-clients kernel_untar_build failure
1009
* https://tracker.ceph.com/issues/57206
1010 90 Rishabh Dave
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1011
1012
h3. 2022 Sep 29
1013
1014
http://pulpito.front.sepia.ceph.com/rishabh-2022-09-14_12:48:43-fs-wip-rishabh-testing-2022Sep9-1708-testing-default-smithi/
1015
1016
* https://tracker.ceph.com/issues/55804
1017
  Command failed (workunit test suites/pjd.sh)
1018
* https://tracker.ceph.com/issues/36593
1019
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1020
* https://tracker.ceph.com/issues/52624
1021
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1022
* https://tracker.ceph.com/issues/51964
1023
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1024
* https://tracker.ceph.com/issues/56632
1025
  Test failure: test_subvolume_snapshot_clone_quota_exceeded
1026
* https://tracker.ceph.com/issues/50821
1027 88 Patrick Donnelly
  qa: untar_snap_rm failure during mds thrashing
1028
1029
h3. 2022 Sep 26
1030
1031
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220923.171109
1032
1033
* https://tracker.ceph.com/issues/55804
1034
    qa failure: pjd link tests failed
1035
* https://tracker.ceph.com/issues/57676
1036
    qa: error during scrub thrashing: rank damage found: {'backtrace'}
1037
* https://tracker.ceph.com/issues/52624
1038
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1039
* https://tracker.ceph.com/issues/57580
1040
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1041
* https://tracker.ceph.com/issues/48773
1042
    qa: scrub does not complete
1043
* https://tracker.ceph.com/issues/57299
1044
    qa: test_dump_loads fails with JSONDecodeError
1045
* https://tracker.ceph.com/issues/57280
1046
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1047
* https://tracker.ceph.com/issues/57205
1048
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1049
* https://tracker.ceph.com/issues/57656
1050
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1051
* https://tracker.ceph.com/issues/57677
1052
    qa: "1 MDSs behind on trimming (MDS_TRIM)"
1053
* https://tracker.ceph.com/issues/57206
1054
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1055
* https://tracker.ceph.com/issues/57446
1056
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1057 89 Patrick Donnelly
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1058
    qa: fs:mixed-clients kernel_untar_build failure
1059 88 Patrick Donnelly
* https://tracker.ceph.com/issues/57682
1060
    client: ERROR: test_reconnect_after_blocklisted
1061 87 Patrick Donnelly
1062
1063
h3. 2022 Sep 22
1064
1065
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220920.234701
1066
1067
* https://tracker.ceph.com/issues/57299
1068
    qa: test_dump_loads fails with JSONDecodeError
1069
* https://tracker.ceph.com/issues/57205
1070
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1071
* https://tracker.ceph.com/issues/52624
1072
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1073
* https://tracker.ceph.com/issues/57580
1074
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1075
* https://tracker.ceph.com/issues/57280
1076
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1077
* https://tracker.ceph.com/issues/48773
1078
    qa: scrub does not complete
1079
* https://tracker.ceph.com/issues/56446
1080
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1081
* https://tracker.ceph.com/issues/57206
1082
    libcephfs/test.sh: ceph_test_libcephfs_reclaim
1083
* https://tracker.ceph.com/issues/51267
1084
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1085
1086
NEW:
1087
1088
* https://tracker.ceph.com/issues/57656
1089
    [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
1090
* https://tracker.ceph.com/issues/57655 [Exists in main as well]
1091
    qa: fs:mixed-clients kernel_untar_build failure
1092
* https://tracker.ceph.com/issues/57657
1093
    mds: scrub locates mismatch between child accounted_rstats and self rstats
1094
1095
Segfault probably caused by: https://github.com/ceph/ceph/pull/47795#issuecomment-1255724799
1096 80 Venky Shankar
1097 79 Venky Shankar
1098
h3. 2022 Sep 16
1099
1100
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220905-132828
1101
1102
* https://tracker.ceph.com/issues/57446
1103
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
1104
* https://tracker.ceph.com/issues/57299
1105
    qa: test_dump_loads fails with JSONDecodeError
1106
* https://tracker.ceph.com/issues/50223
1107
    client.xxxx isn't responding to mclientcaps(revoke)
1108
* https://tracker.ceph.com/issues/52624
1109
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1110
* https://tracker.ceph.com/issues/57205
1111
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1112
* https://tracker.ceph.com/issues/57280
1113
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
1114
* https://tracker.ceph.com/issues/51282
1115
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1116
* https://tracker.ceph.com/issues/48203
1117
  https://tracker.ceph.com/issues/36593
1118
    qa: quota failure
1119
    qa: quota failure caused by clients stepping on each other
1120
* https://tracker.ceph.com/issues/57580
1121 77 Rishabh Dave
    Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
1122
1123 76 Rishabh Dave
1124
h3. 2022 Aug 26
1125
1126
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-22_17:49:59-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1127
http://pulpito.front.sepia.ceph.com/rishabh-2022-08-24_11:56:51-fs-wip-rishabh-testing-2022Aug19-testing-default-smithi/
1128
1129
* https://tracker.ceph.com/issues/57206
1130
  libcephfs/test.sh: ceph_test_libcephfs_reclaim
1131
* https://tracker.ceph.com/issues/56632
1132
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1133
* https://tracker.ceph.com/issues/56446
1134
  Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1135
* https://tracker.ceph.com/issues/51964
1136
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1137
* https://tracker.ceph.com/issues/53859
1138
  qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1139
1140
* https://tracker.ceph.com/issues/54460
1141
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1142
* https://tracker.ceph.com/issues/54462
1143
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1144
1146
* https://tracker.ceph.com/issues/36593
1147
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1148
1149
* https://tracker.ceph.com/issues/52624
1150
  qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1151
* https://tracker.ceph.com/issues/55804
1152
  Command failed (workunit test suites/pjd.sh)
1153
* https://tracker.ceph.com/issues/50223
1154
  client.xxxx isn't responding to mclientcaps(revoke)
1155 75 Venky Shankar
1156
1157
h3. 2022 Aug 22
1158
1159
https://pulpito.ceph.com/vshankar-2022-08-12_09:34:24-fs-wip-vshankar-testing1-20220812-072441-testing-default-smithi/
1160
https://pulpito.ceph.com/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/ (drop problematic PR and re-run)
1161
1162
* https://tracker.ceph.com/issues/52624
1163
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1164
* https://tracker.ceph.com/issues/56446
1165
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1166
* https://tracker.ceph.com/issues/55804
1167
    Command failed (workunit test suites/pjd.sh)
1168
* https://tracker.ceph.com/issues/51278
1169
    mds: "FAILED ceph_assert(!segments.empty())"
1170
* https://tracker.ceph.com/issues/54460
1171
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1172
* https://tracker.ceph.com/issues/57205
1173
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
1174
* https://tracker.ceph.com/issues/57206
1175
    ceph_test_libcephfs_reclaim crashes during test
1176
* https://tracker.ceph.com/issues/53859
1177
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1178
* https://tracker.ceph.com/issues/50223
1179 72 Venky Shankar
    client.xxxx isn't responding to mclientcaps(revoke)
1180
1181
h3. 2022 Aug 12
1182
1183
https://pulpito.ceph.com/vshankar-2022-08-10_04:06:00-fs-wip-vshankar-testing-20220805-190751-testing-default-smithi/
1184
https://pulpito.ceph.com/vshankar-2022-08-11_12:16:58-fs-wip-vshankar-testing-20220811-145809-testing-default-smithi/ (drop problematic PR and re-run)
1185
1186
* https://tracker.ceph.com/issues/52624
1187
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1188
* https://tracker.ceph.com/issues/56446
1189
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1190
* https://tracker.ceph.com/issues/51964
1191
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1192
* https://tracker.ceph.com/issues/55804
1193
    Command failed (workunit test suites/pjd.sh)
1194
* https://tracker.ceph.com/issues/50223
1195
    client.xxxx isn't responding to mclientcaps(revoke)
1196
* https://tracker.ceph.com/issues/50821
1197 73 Venky Shankar
    qa: untar_snap_rm failure during mds thrashing
1198 72 Venky Shankar
* https://tracker.ceph.com/issues/54460
1199 71 Venky Shankar
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1200
1201
h3. 2022 Aug 04
1202
1203
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220804-123835 (only mgr/volumes, mgr/stats)
1204
1205 69 Rishabh Dave
Unrelated teuthology failure on rhel
1206 68 Rishabh Dave
1207
h3. 2022 Jul 25
1208
1209
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-22_11:34:20-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1210
1211 74 Rishabh Dave
1st re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_03:51:19-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi
1212
2nd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1213 68 Rishabh Dave
3rd re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-24_08:53:36-fs-wip-rishabh-testing-2022Jul22-1400-testing-default-smithi/
1214
4th (final) re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-28_03:59:01-fs-wip-rishabh-testing-2022Jul28-0143-testing-default-smithi/
1215
1216
* https://tracker.ceph.com/issues/55804
1217
  Command failed (workunit test suites/pjd.sh)
1218
* https://tracker.ceph.com/issues/50223
1219
  client.xxxx isn't responding to mclientcaps(revoke)
1220
1221
* https://tracker.ceph.com/issues/54460
1222
  Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithixxx with status 1
1223 1 Patrick Donnelly
* https://tracker.ceph.com/issues/36593
1224 74 Rishabh Dave
  Command failed (workunit test fs/quota/quota.sh) on smithixxx with status 1
1225 68 Rishabh Dave
* https://tracker.ceph.com/issues/54462
1226 67 Patrick Donnelly
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
1227
1228
h3. 2022 July 22
1229
1230
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220721.235756
1231
1232
MDS_HEALTH_DUMMY error in log fixed by a follow-up commit.
1233
transient selinux ping failure
1234
1235
* https://tracker.ceph.com/issues/56694
1236
    qa: avoid blocking forever on hung umount
1237
* https://tracker.ceph.com/issues/56695
1238
    [RHEL stock] pjd test failures
1239
* https://tracker.ceph.com/issues/56696
1240
    admin keyring disappears during qa run
1241
* https://tracker.ceph.com/issues/56697
1242
    qa: fs/snaps fails for fuse
1243
* https://tracker.ceph.com/issues/50222
1244
    osd: 5.2s0 deep-scrub : stat mismatch
1245
* https://tracker.ceph.com/issues/56698
1246
    client: FAILED ceph_assert(_size == 0)
1247
* https://tracker.ceph.com/issues/50223
1248
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1249 66 Rishabh Dave
1250 65 Rishabh Dave
1251
h3. 2022 Jul 15
1252
1253
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-08_23:53:34-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1254
1255
re-run: http://pulpito.front.sepia.ceph.com/rishabh-2022-07-15_06:42:04-fs-wip-rishabh-testing-2022Jul08-1820-testing-default-smithi/
1256
1257
* https://tracker.ceph.com/issues/53859
1258
  Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1259
* https://tracker.ceph.com/issues/55804
1260
  Command failed (workunit test suites/pjd.sh)
1261
* https://tracker.ceph.com/issues/50223
1262
  client.xxxx isn't responding to mclientcaps(revoke)
1263
* https://tracker.ceph.com/issues/50222
1264
  osd: deep-scrub : stat mismatch
1265
1266
* https://tracker.ceph.com/issues/56632
1267
  Test failure: test_subvolume_snapshot_clone_quota_exceeded (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
1268
* https://tracker.ceph.com/issues/56634
1269
  workunit test fs/snaps/snaptest-intodir.sh
1270
* https://tracker.ceph.com/issues/56644
1271
  Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentation)
1272
1273 61 Rishabh Dave
1274
1275
h3. 2022 July 05
1276 62 Rishabh Dave
1277 64 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-02_14:14:52-fs-wip-rishabh-testing-20220702-1631-testing-default-smithi/
1278
1279
On 1st re-run some jobs passed - http://pulpito.front.sepia.ceph.com/rishabh-2022-07-03_15:10:28-fs-wip-rishabh-testing-20220702-1631-distro-default-smithi/
1280
1281
On 2nd re-run only a few jobs failed -
1282 62 Rishabh Dave
http://pulpito.front.sepia.ceph.com/rishabh-2022-07-06_05:24:29-fs-wip-rishabh-testing-20220705-2132-distro-default-smithi/
1283
1284
1285
* https://tracker.ceph.com/issues/56446
1286
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1287
* https://tracker.ceph.com/issues/55804
1288
    Command failed (workunit test suites/pjd.sh) on smithi047 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/
1289
1290
* https://tracker.ceph.com/issues/56445
1291 63 Rishabh Dave
    Command failed on smithi080 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1292
* https://tracker.ceph.com/issues/51267
1293
    Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi098 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
1294 62 Rishabh Dave
* https://tracker.ceph.com/issues/50224
1295
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
1296 61 Rishabh Dave
1297 58 Venky Shankar
1298
1299
h3. 2022 July 04
1300
1301
https://pulpito.ceph.com/vshankar-2022-06-29_09:19:00-fs-wip-vshankar-testing-20220627-100931-testing-default-smithi/
1302
(rhel runs were borked due to: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/JSZQFUKVLDND4W33PXDGCABPHNSPT6SS/, tests ran with --filter-out=rhel)
1303
1304
* https://tracker.ceph.com/issues/56445
1305 59 Rishabh Dave
    Command failed on smithi162 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --no-run-if-empty -- gzip --"
1306
* https://tracker.ceph.com/issues/56446
1307
    Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
1308
* https://tracker.ceph.com/issues/51964
1309 60 Rishabh Dave
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1310 59 Rishabh Dave
* https://tracker.ceph.com/issues/52624
1311 57 Venky Shankar
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1312
1313
h3. 2022 June 20
1314
1315
https://pulpito.ceph.com/vshankar-2022-06-15_04:03:39-fs-wip-vshankar-testing1-20220615-072516-testing-default-smithi/
1316
https://pulpito.ceph.com/vshankar-2022-06-19_08:22:46-fs-wip-vshankar-testing1-20220619-102531-testing-default-smithi/
1317
1318
* https://tracker.ceph.com/issues/52624
1319
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1320
* https://tracker.ceph.com/issues/55804
1321
    qa failure: pjd link tests failed
1322
* https://tracker.ceph.com/issues/54108
1323
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1324
* https://tracker.ceph.com/issues/55332
1325 56 Patrick Donnelly
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
1326
1327
h3. 2022 June 13
1328
1329
https://pulpito.ceph.com/pdonnell-2022-06-12_05:08:12-fs:workload-wip-pdonnell-testing-20220612.004943-distro-default-smithi/
1330
1331
* https://tracker.ceph.com/issues/56024
1332
    cephadm: removes ceph.conf during qa run causing command failure
1333
* https://tracker.ceph.com/issues/48773
1334
    qa: scrub does not complete
1335
* https://tracker.ceph.com/issues/56012
1336
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1337 55 Venky Shankar
1338 54 Venky Shankar
1339
h3. 2022 Jun 13
1340
1341
https://pulpito.ceph.com/vshankar-2022-06-07_00:25:50-fs-wip-vshankar-testing-20220606-223254-testing-default-smithi/
1342
https://pulpito.ceph.com/vshankar-2022-06-10_01:04:46-fs-wip-vshankar-testing-20220609-175550-testing-default-smithi/
1343
1344
* https://tracker.ceph.com/issues/52624
1345
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1346
* https://tracker.ceph.com/issues/51964
1347
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1348
* https://tracker.ceph.com/issues/53859
1349
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1350
* https://tracker.ceph.com/issues/55804
1351
    qa failure: pjd link tests failed
1352
* https://tracker.ceph.com/issues/56003
1353
    client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
1354
* https://tracker.ceph.com/issues/56011
1355
    fs/thrash: snaptest-snap-rm-cmp.sh fails in md5sum comparison
1356
* https://tracker.ceph.com/issues/56012
1357 53 Venky Shankar
    mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
1358
1359
h3. 2022 Jun 07
1360
1361
https://pulpito.ceph.com/vshankar-2022-06-06_21:25:41-fs-wip-vshankar-testing1-20220606-230129-testing-default-smithi/
1362
https://pulpito.ceph.com/vshankar-2022-06-07_10:53:31-fs-wip-vshankar-testing1-20220607-104134-testing-default-smithi/ (rerun after dropping a problematic PR)
1363
1364
* https://tracker.ceph.com/issues/52624
1365
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1366
* https://tracker.ceph.com/issues/50223
1367
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1368
* https://tracker.ceph.com/issues/50224
1369 51 Venky Shankar
    qa: test_mirroring_init_failure_with_recovery failure
1370
1371
h3. 2022 May 12
1372 52 Venky Shankar
1373 51 Venky Shankar
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220509-125847
1374
https://pulpito.ceph.com/vshankar-2022-05-13_17:09:16-fs-wip-vshankar-testing-20220513-120051-testing-default-smithi/ (drop PRs + rerun)
1375
1376
* https://tracker.ceph.com/issues/52624
1377
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1378
* https://tracker.ceph.com/issues/50223
1379
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1380
* https://tracker.ceph.com/issues/55332
1381
    Failure in snaptest-git-ceph.sh
1382
* https://tracker.ceph.com/issues/53859
1383 1 Patrick Donnelly
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1384 52 Venky Shankar
* https://tracker.ceph.com/issues/55538
1385
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1386 51 Venky Shankar
* https://tracker.ceph.com/issues/55258
1387 49 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs (cropss up again, though very infrequent)
1388
1389 50 Venky Shankar
h3. 2022 May 04
1390
1391
https://pulpito.ceph.com/vshankar-2022-05-01_13:18:44-fs-wip-vshankar-testing1-20220428-204527-testing-default-smithi/
1392 49 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-05-02_16:58:59-fs-wip-vshankar-testing1-20220502-201957-testing-default-smithi/ (after dropping PRs)
1393
1394
* https://tracker.ceph.com/issues/52624
1395
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1396
* https://tracker.ceph.com/issues/50223
1397
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1398
* https://tracker.ceph.com/issues/55332
1399
    Failure in snaptest-git-ceph.sh
1400
* https://tracker.ceph.com/issues/53859
1401
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1402
* https://tracker.ceph.com/issues/55516
1403
    qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
1404
* https://tracker.ceph.com/issues/55537
1405
    mds: crash during fs:upgrade test
1406
* https://tracker.ceph.com/issues/55538
1407 48 Venky Shankar
    Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
1408
1409
h3. 2022 Apr 25
1410
1411
https://pulpito.ceph.com/?branch=wip-vshankar-testing-20220420-113951 (owner vshankar)
1412
1413
* https://tracker.ceph.com/issues/52624
1414
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1415
* https://tracker.ceph.com/issues/50223
1416
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1417
* https://tracker.ceph.com/issues/55258
1418
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1419
* https://tracker.ceph.com/issues/55377
1420 47 Venky Shankar
    kclient: mds revoke Fwb caps stuck after the kclient tries writeback once
1421
1422
h3. 2022 Apr 14
1423
1424
https://pulpito.ceph.com/?branch=wip-vshankar-testing1-20220411-144044
1425
1426
* https://tracker.ceph.com/issues/52624
1427
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1428
* https://tracker.ceph.com/issues/50223
1429
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1430
* https://tracker.ceph.com/issues/52438
1431
    qa: ffsb timeout
1432
* https://tracker.ceph.com/issues/55170
1433
    mds: crash during rejoin (CDir::fetch_keys)
1434
* https://tracker.ceph.com/issues/55331
1435
    pjd failure
1436
* https://tracker.ceph.com/issues/48773
1437
    qa: scrub does not complete
1438
* https://tracker.ceph.com/issues/55332
1439
    Failure in snaptest-git-ceph.sh
1440
* https://tracker.ceph.com/issues/55258
1441 45 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1442
1443 46 Venky Shankar
h3. 2022 Apr 11
1444 45 Venky Shankar
1445
https://pulpito.ceph.com/?branch=wip-vshankar-testing-55110-20220408-203242
1446
1447
* https://tracker.ceph.com/issues/48773
1448
    qa: scrub does not complete
1449
* https://tracker.ceph.com/issues/52624
1450
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1451
* https://tracker.ceph.com/issues/52438
1452
    qa: ffsb timeout
1453
* https://tracker.ceph.com/issues/48680
1454
    mds: scrubbing stuck "scrub active (0 inodes in the stack)"
1455
* https://tracker.ceph.com/issues/55236
1456
    qa: fs/snaps tests fails with "hit max job timeout"
1457
* https://tracker.ceph.com/issues/54108
1458
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1459
* https://tracker.ceph.com/issues/54971
1460
    Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
1461
* https://tracker.ceph.com/issues/50223
1462
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1463
* https://tracker.ceph.com/issues/55258
1464 44 Venky Shankar
    lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
1465 42 Venky Shankar
1466 43 Venky Shankar
h3. 2022 Mar 21
1467
1468
https://pulpito.ceph.com/vshankar-2022-03-20_02:16:37-fs-wip-vshankar-testing-20220319-163539-testing-default-smithi/
1469
1470
Run didn't go well; lots of failures. Debugging by dropping PRs and running against the master branch, merging only unrelated PRs that pass tests.
1471
1472
1473 42 Venky Shankar
h3. 2022 Mar 08
1474
1475
https://pulpito.ceph.com/vshankar-2022-02-28_04:32:15-fs-wip-vshankar-testing-20220226-211550-testing-default-smithi/
1476
1477
rerun with
1478
- (drop) https://github.com/ceph/ceph/pull/44679
1479
- (drop) https://github.com/ceph/ceph/pull/44958
1480
https://pulpito.ceph.com/vshankar-2022-03-06_14:47:51-fs-wip-vshankar-testing-20220304-132102-testing-default-smithi/
1481
1482
* https://tracker.ceph.com/issues/54419 (new)
1483
    `ceph orch upgrade start` seems to never reach completion
1484
* https://tracker.ceph.com/issues/51964
1485
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1486
* https://tracker.ceph.com/issues/52624
1487
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1488
* https://tracker.ceph.com/issues/50223
1489
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1490
* https://tracker.ceph.com/issues/52438
1491
    qa: ffsb timeout
1492
* https://tracker.ceph.com/issues/50821
1493
    qa: untar_snap_rm failure during mds thrashing
1494 41 Venky Shankar
1495
1496
h3. 2022 Feb 09
1497
1498
https://pulpito.ceph.com/vshankar-2022-02-05_17:27:49-fs-wip-vshankar-testing-20220201-113815-testing-default-smithi/
1499
1500
rerun with
1501
- (drop) https://github.com/ceph/ceph/pull/37938
1502
- (drop) https://github.com/ceph/ceph/pull/44335
1503
- (drop) https://github.com/ceph/ceph/pull/44491
1504
- (drop) https://github.com/ceph/ceph/pull/44501
1505
https://pulpito.ceph.com/vshankar-2022-02-08_14:27:29-fs-wip-vshankar-testing-20220208-181241-testing-default-smithi/
1506
1507
* https://tracker.ceph.com/issues/51964
1508
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1509
* https://tracker.ceph.com/issues/54066
1510
    test_subvolume_no_upgrade_v1_sanity fails with `AssertionError: 1000 != 0`
1511
* https://tracker.ceph.com/issues/48773
1512
    qa: scrub does not complete
1513
* https://tracker.ceph.com/issues/52624
1514
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1515
* https://tracker.ceph.com/issues/50223
1516
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1517
* https://tracker.ceph.com/issues/52438
1518 40 Patrick Donnelly
    qa: ffsb timeout
1519
1520
h3. 2022 Feb 01
1521
1522
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20220127.171526
1523
1524
* https://tracker.ceph.com/issues/54107
1525
    kclient: hang during umount
1526
* https://tracker.ceph.com/issues/54106
1527
    kclient: hang during workunit cleanup
1528
* https://tracker.ceph.com/issues/54108
1529
    qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
1530
* https://tracker.ceph.com/issues/48773
1531
    qa: scrub does not complete
1532
* https://tracker.ceph.com/issues/52438
1533
    qa: ffsb timeout
1534 36 Venky Shankar
1535
1536
h3. 2022 Jan 13
1537 39 Venky Shankar
1538 36 Venky Shankar
https://pulpito.ceph.com/vshankar-2022-01-06_13:18:41-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1539 38 Venky Shankar
1540
rerun with:
1541 36 Venky Shankar
- (add) https://github.com/ceph/ceph/pull/44570
1542
- (drop) https://github.com/ceph/ceph/pull/43184
1543
https://pulpito.ceph.com/vshankar-2022-01-13_04:42:40-fs-wip-vshankar-testing-20220106-145819-testing-default-smithi/
1544
1545
* https://tracker.ceph.com/issues/50223
1546
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1547
* https://tracker.ceph.com/issues/51282
1548
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1549
* https://tracker.ceph.com/issues/48773
1550
    qa: scrub does not complete
1551
* https://tracker.ceph.com/issues/52624
1552
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1553
* https://tracker.ceph.com/issues/53859
1554 34 Venky Shankar
    qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
1555
1556
h3. 2022 Jan 03
1557
1558
https://pulpito.ceph.com/vshankar-2021-12-22_07:37:44-fs-wip-vshankar-testing-20211216-114012-testing-default-smithi/
1559
https://pulpito.ceph.com/vshankar-2022-01-03_12:27:45-fs-wip-vshankar-testing-20220103-142738-testing-default-smithi/ (rerun)
1560
1561
* https://tracker.ceph.com/issues/50223
1562
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1563
* https://tracker.ceph.com/issues/51964
1564
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1565
* https://tracker.ceph.com/issues/51267
1566
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1567
* https://tracker.ceph.com/issues/51282
1568
    pybind/mgr/mgr_util: .mgr pool may be created to early causing spurious PG_DEGRADED warnings
1569
* https://tracker.ceph.com/issues/50821
1570
    qa: untar_snap_rm failure during mds thrashing
1571 35 Ramana Raja
* https://tracker.ceph.com/issues/51278
1572
    mds: "FAILED ceph_assert(!segments.empty())"
1573
* https://tracker.ceph.com/issues/52279
1574 34 Venky Shankar
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1575 33 Patrick Donnelly
1576
1577
h3. 2021 Dec 22
1578
1579
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211222.014316
1580
1581
* https://tracker.ceph.com/issues/52624
1582
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1583
* https://tracker.ceph.com/issues/50223
1584
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1585
* https://tracker.ceph.com/issues/52279
1586
    cephadm tests fail due to: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
1587
* https://tracker.ceph.com/issues/50224
1588
    qa: test_mirroring_init_failure_with_recovery failure
1589
* https://tracker.ceph.com/issues/48773
1590
    qa: scrub does not complete
1591 32 Venky Shankar
1592
1593
h3. 2021 Nov 30
1594
1595
https://pulpito.ceph.com/vshankar-2021-11-24_07:14:27-fs-wip-vshankar-testing-20211124-094330-testing-default-smithi/
1596
https://pulpito.ceph.com/vshankar-2021-11-30_06:23:32-fs-wip-vshankar-testing-20211124-094330-distro-default-smithi/ (rerun w/ QA fixes)
1597
1598
* https://tracker.ceph.com/issues/53436
1599
    mds, mon: mds beacon messages get dropped? (mds never reaches up:active state)
1600
* https://tracker.ceph.com/issues/51964
1601
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1602
* https://tracker.ceph.com/issues/48812
1603
    qa: test_scrub_pause_and_resume_with_abort failure
1604
* https://tracker.ceph.com/issues/51076
1605
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1606
* https://tracker.ceph.com/issues/50223
1607
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1608
* https://tracker.ceph.com/issues/52624
1609
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1610
* https://tracker.ceph.com/issues/50250
1611
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1612 31 Patrick Donnelly
1613
1614
h3. 2021 November 9
1615
1616
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211109.180315
1617
1618
* https://tracker.ceph.com/issues/53214
1619
    qa: "dd: error reading '/sys/kernel/debug/ceph/2a934501-6731-4052-a836-f42229a869be.client4874/metrics': Is a directory"
1620
* https://tracker.ceph.com/issues/48773
1621
    qa: scrub does not complete
1622
* https://tracker.ceph.com/issues/50223
1623
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1624
* https://tracker.ceph.com/issues/51282
1625
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1626
* https://tracker.ceph.com/issues/52624
1627
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1628
* https://tracker.ceph.com/issues/53216
1629
    qa: "RuntimeError: value of attributes should be either str or None. client_id"
1630
* https://tracker.ceph.com/issues/50250
1631
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1632
1633 30 Patrick Donnelly
1634
1635
h3. 2021 November 03
1636
1637
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211103.023355
1638
1639
* https://tracker.ceph.com/issues/51964
1640
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
1641
* https://tracker.ceph.com/issues/51282
1642
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1643
* https://tracker.ceph.com/issues/52436
1644
    fs/ceph: "corrupt mdsmap"
1645
* https://tracker.ceph.com/issues/53074
1646
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1647
* https://tracker.ceph.com/issues/53150
1648
    pybind/mgr/cephadm/upgrade: tolerate MDS failures during upgrade straddling v16.2.5
1649
* https://tracker.ceph.com/issues/53155
1650
    MDSMonitor: assertion during upgrade to v16.2.5+
1651 29 Patrick Donnelly
1652
1653
h3. 2021 October 26
1654
1655
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211025.000447
1656
1657
* https://tracker.ceph.com/issues/53074
1658
    pybind/mgr/cephadm: upgrade sequence does not continue if no MDS are active
1659
* https://tracker.ceph.com/issues/52997
1660
    testing: hanging umount
1661
* https://tracker.ceph.com/issues/50824
1662
    qa: snaptest-git-ceph bus error
1663
* https://tracker.ceph.com/issues/52436
1664
    fs/ceph: "corrupt mdsmap"
1665
* https://tracker.ceph.com/issues/48773
1666
    qa: scrub does not complete
1667
* https://tracker.ceph.com/issues/53082
1668
    ceph-fuse: segmentation fault in Client::handle_mds_map
1669
* https://tracker.ceph.com/issues/50223
1670
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1671
* https://tracker.ceph.com/issues/52624
1672
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1673
* https://tracker.ceph.com/issues/50224
1674
    qa: test_mirroring_init_failure_with_recovery failure
1675
* https://tracker.ceph.com/issues/50821
1676
    qa: untar_snap_rm failure during mds thrashing
1677
* https://tracker.ceph.com/issues/50250
1678
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1679
1680 27 Patrick Donnelly
1681
1682 28 Patrick Donnelly
h3. 2021 October 19
1683 27 Patrick Donnelly
1684
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211019.013028
1685
1686
* https://tracker.ceph.com/issues/52995
1687
    qa: test_standby_count_wanted failure
1688
* https://tracker.ceph.com/issues/52948
1689
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1690
* https://tracker.ceph.com/issues/52996
1691
    qa: test_perf_counters via test_openfiletable
1692
* https://tracker.ceph.com/issues/48772
1693
    qa: pjd: not ok 9, 44, 80
1694
* https://tracker.ceph.com/issues/52997
1695
    testing: hanging umount
1696
* https://tracker.ceph.com/issues/50250
1697
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1698
* https://tracker.ceph.com/issues/52624
1699
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1700
* https://tracker.ceph.com/issues/50223
1701
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1702
* https://tracker.ceph.com/issues/50821
1703
    qa: untar_snap_rm failure during mds thrashing
1704
* https://tracker.ceph.com/issues/48773
1705
    qa: scrub does not complete
1706 26 Patrick Donnelly
1707
1708
h3. 2021 October 12
1709
1710
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211012.192211
1711
1712
Some failures caused by teuthology bug: https://tracker.ceph.com/issues/52944
1713
1714
New test caused failure: https://github.com/ceph/ceph/pull/43297#discussion_r729883167
1715
1716
1717
* https://tracker.ceph.com/issues/51282
1718
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1719
* https://tracker.ceph.com/issues/52948
1720
    osd: fails to come up: "teuthology.misc:7 of 8 OSDs are up"
1721
* https://tracker.ceph.com/issues/48773
1722
    qa: scrub does not complete
1723
* https://tracker.ceph.com/issues/50224
1724
    qa: test_mirroring_init_failure_with_recovery failure
1725
* https://tracker.ceph.com/issues/52949
1726
    RuntimeError: The following counters failed to be set on mds daemons: {'mds.dir_split'}
1727 25 Patrick Donnelly
1728 23 Patrick Donnelly
1729 24 Patrick Donnelly
h3. 2021 October 02
1730
1731
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20211002.163337
1732
1733
Some failures caused by cephadm upgrade test. Fixed in follow-up qa commit.
1734
1735
test_simple failures caused by a PR in this set.
1736
1737
A few reruns because of QA infra noise.
1738
1739
* https://tracker.ceph.com/issues/52822
1740
    qa: failed pacific install on fs:upgrade
1741
* https://tracker.ceph.com/issues/52624
1742
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1743
* https://tracker.ceph.com/issues/50223
1744
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1745
* https://tracker.ceph.com/issues/48773
1746
    qa: scrub does not complete
1747
1748
1749 23 Patrick Donnelly
h3. 2021 September 20
1750
1751
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210917.174826
1752
1753
* https://tracker.ceph.com/issues/52677
1754
    qa: test_simple failure
1755
* https://tracker.ceph.com/issues/51279
1756
    kclient hangs on umount (testing branch)
1757
* https://tracker.ceph.com/issues/50223
1758
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1759
* https://tracker.ceph.com/issues/50250
1760
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1761
* https://tracker.ceph.com/issues/52624
1762
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1763
* https://tracker.ceph.com/issues/52438
1764
    qa: ffsb timeout
1765 22 Patrick Donnelly
1766
1767
h3. 2021 September 10
1768
1769
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210910.181451
1770
1771
* https://tracker.ceph.com/issues/50223
1772
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1773
* https://tracker.ceph.com/issues/50250
1774
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1775
* https://tracker.ceph.com/issues/52624
1776
    qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
1777
* https://tracker.ceph.com/issues/52625
1778
    qa: test_kill_mdstable (tasks.cephfs.test_snapshots.TestSnapshots)
1779
* https://tracker.ceph.com/issues/52439
1780
    qa: acls does not compile on centos stream
1781
* https://tracker.ceph.com/issues/50821
1782
    qa: untar_snap_rm failure during mds thrashing
1783
* https://tracker.ceph.com/issues/48773
1784
    qa: scrub does not complete
1785
* https://tracker.ceph.com/issues/52626
1786
    mds: ScrubStack.cc: 831: FAILED ceph_assert(diri)
1787
* https://tracker.ceph.com/issues/51279
1788
    kclient hangs on umount (testing branch)
1789 21 Patrick Donnelly
1790
1791
h3. 2021 August 27
1792
1793
Several jobs died because of device failures.
1794
1795
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210827.024746
1796
1797
* https://tracker.ceph.com/issues/52430
1798
    mds: fast async create client mount breaks racy test
1799
* https://tracker.ceph.com/issues/52436
1800
    fs/ceph: "corrupt mdsmap"
1801
* https://tracker.ceph.com/issues/52437
1802
    mds: InoTable::replay_release_ids abort via test_inotable_sync
1803
* https://tracker.ceph.com/issues/51282
1804
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1805
* https://tracker.ceph.com/issues/52438
1806
    qa: ffsb timeout
1807
* https://tracker.ceph.com/issues/52439
1808
    qa: acls does not compile on centos stream
1809 20 Patrick Donnelly
1810
1811
h3. 2021 July 30
1812
1813
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210729.214022
1814
1815
* https://tracker.ceph.com/issues/50250
1816
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1817
* https://tracker.ceph.com/issues/51282
1818
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1819
* https://tracker.ceph.com/issues/48773
1820
    qa: scrub does not complete
1821
* https://tracker.ceph.com/issues/51975
1822
    pybind/mgr/stats: KeyError
1823 19 Patrick Donnelly
1824
1825
h3. 2021 July 28
1826
1827
https://pulpito.ceph.com/pdonnell-2021-07-28_00:39:45-fs-wip-pdonnell-testing-20210727.213757-distro-basic-smithi/
1828
1829
with qa fix: https://pulpito.ceph.com/pdonnell-2021-07-28_16:20:28-fs-wip-pdonnell-testing-20210728.141004-distro-basic-smithi/
1830
1831
* https://tracker.ceph.com/issues/51905
1832
    qa: "error reading sessionmap 'mds1_sessionmap'"
1833
* https://tracker.ceph.com/issues/48773
1834
    qa: scrub does not complete
1835
* https://tracker.ceph.com/issues/50250
1836
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1837
* https://tracker.ceph.com/issues/51267
1838
    CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
1839
* https://tracker.ceph.com/issues/51279
1840
    kclient hangs on umount (testing branch)
1841 18 Patrick Donnelly
1842
1843
h3. 2021 July 16
1844
1845
https://pulpito.ceph.com/pdonnell-2021-07-16_05:50:11-fs-wip-pdonnell-testing-20210716.022804-distro-basic-smithi/
1846
1847
* https://tracker.ceph.com/issues/48773
1848
    qa: scrub does not complete
1849
* https://tracker.ceph.com/issues/48772
1850
    qa: pjd: not ok 9, 44, 80
1851
* https://tracker.ceph.com/issues/45434
1852
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1853
* https://tracker.ceph.com/issues/51279
1854
    kclient hangs on umount (testing branch)
1855
* https://tracker.ceph.com/issues/50824
1856
    qa: snaptest-git-ceph bus error
1857 17 Patrick Donnelly
1858
1859
h3. 2021 July 04
1860
1861
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210703.052904
1862
1863
* https://tracker.ceph.com/issues/48773
1864
    qa: scrub does not complete
1865
* https://tracker.ceph.com/issues/39150
1866
    mon: "FAILED ceph_assert(session_map.sessions.empty())" when out of quorum
1867
* https://tracker.ceph.com/issues/45434
1868
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1869
* https://tracker.ceph.com/issues/51282
1870
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1871
* https://tracker.ceph.com/issues/48771
1872
    qa: iogen: workload fails to cause balancing
1873
* https://tracker.ceph.com/issues/51279
1874
    kclient hangs on umount (testing branch)
1875
* https://tracker.ceph.com/issues/50250
1876
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details" ("freshly-calculated rstats don't match existing ones")
1877 16 Patrick Donnelly
1878
1879
h3. 2021 July 01
1880
1881
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210701.192056
1882
1883
* https://tracker.ceph.com/issues/51197
1884
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
1885
* https://tracker.ceph.com/issues/50866
1886
    osd: stat mismatch on objects
1887
* https://tracker.ceph.com/issues/48773
1888
    qa: scrub does not complete
1889 15 Patrick Donnelly
1890
1891
h3. 2021 June 26
1892
1893
https://pulpito.ceph.com/pdonnell-2021-06-26_00:57:00-fs-wip-pdonnell-testing-20210625.225421-distro-basic-smithi/
1894
1895
* https://tracker.ceph.com/issues/51183
1896
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1897
* https://tracker.ceph.com/issues/51410
1898
    kclient: fails to finish reconnect during MDS thrashing (testing branch)
1899
* https://tracker.ceph.com/issues/48773
1900
    qa: scrub does not complete
1901
* https://tracker.ceph.com/issues/51282
1902
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1903
* https://tracker.ceph.com/issues/51169
1904
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1905
* https://tracker.ceph.com/issues/48772
1906
    qa: pjd: not ok 9, 44, 80
1907 14 Patrick Donnelly
1908
1909
h3. 2021 June 21
1910
1911
https://pulpito.ceph.com/pdonnell-2021-06-22_00:27:21-fs-wip-pdonnell-testing-20210621.231646-distro-basic-smithi/
1912
1913
One failure caused by PR: https://github.com/ceph/ceph/pull/41935#issuecomment-866472599
1914
1915
* https://tracker.ceph.com/issues/51282
1916
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1917
* https://tracker.ceph.com/issues/51183
1918
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1919
* https://tracker.ceph.com/issues/48773
1920
    qa: scrub does not complete
1921
* https://tracker.ceph.com/issues/48771
1922
    qa: iogen: workload fails to cause balancing
1923
* https://tracker.ceph.com/issues/51169
1924
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1925
* https://tracker.ceph.com/issues/50495
1926
    libcephfs: shutdown race fails with status 141
1927
* https://tracker.ceph.com/issues/45434
1928
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1929
* https://tracker.ceph.com/issues/50824
1930
    qa: snaptest-git-ceph bus error
1931
* https://tracker.ceph.com/issues/50223
1932
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
1933 13 Patrick Donnelly
1934
1935
h3. 2021 June 16
1936
1937
https://pulpito.ceph.com/pdonnell-2021-06-16_21:26:55-fs-wip-pdonnell-testing-20210616.191804-distro-basic-smithi/
1938
1939
MDS abort class of failures caused by PR: https://github.com/ceph/ceph/pull/41667
1940
1941
* https://tracker.ceph.com/issues/45434
1942
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1943
* https://tracker.ceph.com/issues/51169
1944
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1945
* https://tracker.ceph.com/issues/43216
1946
    MDSMonitor: removes MDS coming out of quorum election
1947
* https://tracker.ceph.com/issues/51278
1948
    mds: "FAILED ceph_assert(!segments.empty())"
1949
* https://tracker.ceph.com/issues/51279
1950
    kclient hangs on umount (testing branch)
1951
* https://tracker.ceph.com/issues/51280
1952
    mds: "FAILED ceph_assert(r == 0 || r == -2)"
1953
* https://tracker.ceph.com/issues/51183
1954
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1955
* https://tracker.ceph.com/issues/51281
1956
    qa: snaptest-snap-rm-cmp.sh: "echo 'FAIL: bad match, /tmp/a 4637e766853d1ad16a7b17079e2c6f03 != real c3883760b18d50e8d78819c54d579b00'"
1957
* https://tracker.ceph.com/issues/48773
1958
    qa: scrub does not complete
1959
* https://tracker.ceph.com/issues/51076
1960
    "wait_for_recovery: failed before timeout expired" during thrashosd test with EC backend.
1961
* https://tracker.ceph.com/issues/51228
1962
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
1963
* https://tracker.ceph.com/issues/51282
1964
    pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
1965 12 Patrick Donnelly
1966
1967
h3. 2021 June 14
1968
1969
https://pulpito.ceph.com/pdonnell-2021-06-14_20:53:05-fs-wip-pdonnell-testing-20210614.173325-distro-basic-smithi/
1970
1971
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
1972
1973
* https://tracker.ceph.com/issues/51169
1974
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1975
* https://tracker.ceph.com/issues/51228
1976
    qa: rmdir: failed to remove 'a/.snap/*': No such file or directory
1977
* https://tracker.ceph.com/issues/48773
1978
    qa: scrub does not complete
1979
* https://tracker.ceph.com/issues/51183
1980
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
1981
* https://tracker.ceph.com/issues/45434
1982
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
1983
* https://tracker.ceph.com/issues/51182
1984
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
1985
* https://tracker.ceph.com/issues/51229
1986
    qa: test_multi_snap_schedule list difference failure
1987
* https://tracker.ceph.com/issues/50821
1988
    qa: untar_snap_rm failure during mds thrashing
1989 11 Patrick Donnelly
1990
1991
h3. 2021 June 13
1992
1993
https://pulpito.ceph.com/pdonnell-2021-06-12_02:45:35-fs-wip-pdonnell-testing-20210612.002809-distro-basic-smithi/
1994
1995
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
1996
1997
* https://tracker.ceph.com/issues/51169
1998
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
1999
* https://tracker.ceph.com/issues/48773
2000
    qa: scrub does not complete
2001
* https://tracker.ceph.com/issues/51182
2002
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2003
* https://tracker.ceph.com/issues/51183
2004
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2005
* https://tracker.ceph.com/issues/51197
2006
    qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
2007
* https://tracker.ceph.com/issues/45434
2008 10 Patrick Donnelly
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2009
2010
h3. 2021 June 11
2011
2012
https://pulpito.ceph.com/pdonnell-2021-06-11_18:02:10-fs-wip-pdonnell-testing-20210611.162716-distro-basic-smithi/
2013
2014
Some Ubuntu 20.04 upgrade fallout. In particular, upgrade tests are failing due to missing packages for 18.04 Pacific.
2015
2016
* https://tracker.ceph.com/issues/51169
2017
    qa: ubuntu 20.04 sys protections prevent multiuser file access in /tmp
2018
* https://tracker.ceph.com/issues/45434
2019
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2020
* https://tracker.ceph.com/issues/48771
2021
    qa: iogen: workload fails to cause balancing
2022
* https://tracker.ceph.com/issues/43216
2023
    MDSMonitor: removes MDS coming out of quorum election
2024
* https://tracker.ceph.com/issues/51182
2025
    pybind/mgr/snap_schedule: Invalid command: Unexpected argument 'fs=cephfs'
2026
* https://tracker.ceph.com/issues/50223
2027
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
2028
* https://tracker.ceph.com/issues/48773
2029
    qa: scrub does not complete
2030
* https://tracker.ceph.com/issues/51183
2031
    qa: FileNotFoundError: [Errno 2] No such file or directory: '/sys/kernel/debug/ceph/3fab6bea-f243-47a4-a956-8c03a62b61b5.client4721/mds_sessions'
2032
* https://tracker.ceph.com/issues/51184
2033
    qa: fs:bugs does not specify distro
2034 9 Patrick Donnelly
2035
2036
h3. 2021 June 03
2037
2038
https://pulpito.ceph.com/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/
2039
2040
* https://tracker.ceph.com/issues/45434
2041
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
2042
* https://tracker.ceph.com/issues/50016
2043
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
2044
* https://tracker.ceph.com/issues/50821
2045
    qa: untar_snap_rm failure during mds thrashing
2046
* https://tracker.ceph.com/issues/50622 (regression)
2047
    msg: active_connections regression
2048
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2049
    qa: failed umount in test_volumes
2050
* https://tracker.ceph.com/issues/48773
2051
    qa: scrub does not complete
2052
* https://tracker.ceph.com/issues/43216
2053
    MDSMonitor: removes MDS coming out of quorum election
2054 7 Patrick Donnelly
2055
2056 8 Patrick Donnelly
h3. 2021 May 18
2057
2058
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.214114
2059
2060
Regression in the testing kernel caused some failures. Ilya fixed those and the rerun
2061
looked better. Some odd new noise in the rerun relating to packaging and "No
2062
module named 'tasks.ceph'".
2063
2064
* https://tracker.ceph.com/issues/50824
2065
    qa: snaptest-git-ceph bus error
2066
* https://tracker.ceph.com/issues/50622 (regression)
2067
    msg: active_connections regression
2068
* https://tracker.ceph.com/issues/49845#note-2 (regression)
2069
    qa: failed umount in test_volumes
2070
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
2071
    qa: quota failure
2072
2073
2074 7 Patrick Donnelly

h3. 2021 May 18

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210518.025642

* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50866
    osd: stat mismatch on objects
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50867
    qa: fs:mirror: reduced data availability
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50868
    qa: "kern.log.gz already exists; not overwritten"
* https://tracker.ceph.com/issues/50870
    qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"

h3. 2021 May 11

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210511.232042

* one class of failures was caused by a PR
* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize

h3. 2021 May 14

https://pulpito.ceph.com/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/

* https://tracker.ceph.com/issues/48812
    qa: test_scrub_pause_and_resume_with_abort failure
* https://tracker.ceph.com/issues/50821
    qa: untar_snap_rm failure during mds thrashing
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/50822
    qa: testing kernel patch for client metrics causes mds abort
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50823
    qa: RuntimeError: timeout waiting for cluster to stabilize
* https://tracker.ceph.com/issues/50824
    qa: snaptest-git-ceph bus error
* https://tracker.ceph.com/issues/50825
    qa: snaptest-git-ceph hang during mon thrashing v2
* https://tracker.ceph.com/issues/50826
    kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers

h3. 2021 May 01

https://pulpito.ceph.com/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50622 (regression)
    msg: active_connections regression
* https://tracker.ceph.com/issues/45591
    mgr: FAILED ceph_assert(daemon != nullptr)
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"

h3. 2021 Apr 15

https://pulpito.ceph.com/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/

* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/37808
    osd: osdmap cache weak_refs assert during shutdown
* https://tracker.ceph.com/issues/50387
    client: fs/snaps failure
* https://tracker.ceph.com/issues/50389
    mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" in cluster log
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50390
    mds: monclient: wait_auth_rotating timed out after 30

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-testing-20210408.192301-distro-basic-smithi/

* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/50279
    qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/48365
    qa: ffsb build failure on CentOS 8.2
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50280
    cephadm: RuntimeError: uid/gid not found
* https://tracker.ceph.com/issues/50281
    qa: untar_snap_rm timeout

h3. 2021 Apr 08

https://pulpito.ceph.com/pdonnell-2021-04-08_04:31:36-fs-wip-pdonnell-testing-20210408.024225-distro-basic-smithi/
https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20210408.142238 (with logic inversion / QA fix)

* https://tracker.ceph.com/issues/50246
    mds: failure replaying journal (EMetaBlob)
* https://tracker.ceph.com/issues/50250
    mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~dmtmp/COREL) see mds.a log and `damage ls` output for details"

h3. 2021 Apr 07

https://pulpito.ceph.com/pdonnell-2021-04-07_02:12:41-fs-wip-pdonnell-testing-20210406.213012-distro-basic-smithi/

* https://tracker.ceph.com/issues/50215
    qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/50216
    qa: "ls: cannot access 'lost+found': No such file or directory"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/50220
    qa: dbench workload timeout
* https://tracker.ceph.com/issues/50221
    qa: snaptest-git-ceph failure in git diff
* https://tracker.ceph.com/issues/50222
    osd: 5.2s0 deep-scrub : stat mismatch
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/50224
    qa: test_mirroring_init_failure_with_recovery failure

h3. 2021 Apr 01

https://pulpito.ceph.com/pdonnell-2021-04-01_00:45:34-fs-wip-pdonnell-testing-20210331.222326-distro-basic-smithi/

* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50177
    osd: "stalled aio... buggy kernel or bad device?"
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/50178
    qa: "TypeError: run() got an unexpected keyword argument 'shell'"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed

h3. 2021 Mar 24

https://pulpito.ceph.com/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/50019
    qa: mount failure with cephadm "probably no MDS server is up?"
* https://tracker.ceph.com/issues/50020
    qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48772
    qa: pjd: not ok 9, 44, 80
* https://tracker.ceph.com/issues/50021
    qa: snaptest-git-ceph failure during mon thrashing
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing
* https://tracker.ceph.com/issues/50016
    qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"

h3. 2021 Mar 18

https://pulpito.ceph.com/pdonnell-2021-03-18_13:46:31-fs-wip-pdonnell-testing-20210318.024145-distro-basic-smithi/

* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor
* https://tracker.ceph.com/issues/48203 (stock kernel update required)
    qa: quota failure
* https://tracker.ceph.com/issues/49928
    client: items pinned in cache preventing unmount x2

h3. 2021 Mar 15

https://pulpito.ceph.com/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/

* https://tracker.ceph.com/issues/49842
    qa: stuck pkg install
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49822
    test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirroringCommands) failure
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/49843
    qa: fs/snaps/snaptest-upchildrealms.sh failure
* https://tracker.ceph.com/issues/49845
    qa: failed umount in test_volumes
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/49605
    mgr: drops command on the floor

There was also a failure caused by PR https://github.com/ceph/ceph/pull/39969.

h3. 2021 Mar 09

https://pulpito.ceph.com/pdonnell-2021-03-09_03:27:39-fs-wip-pdonnell-testing-20210308.214827-distro-basic-smithi/

* https://tracker.ceph.com/issues/49500
    qa: "Assertion `cb_done' failed."
* https://tracker.ceph.com/issues/48805
    mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and `damage ls` output for details"
* https://tracker.ceph.com/issues/48773
    qa: scrub does not complete
* https://tracker.ceph.com/issues/45434
    qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
* https://tracker.ceph.com/issues/49240
    terminate called after throwing an instance of 'std::bad_alloc'
* https://tracker.ceph.com/issues/49466
    qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJer'"
* https://tracker.ceph.com/issues/49684
    qa: fs:cephadm mount does not wait for mds to be created
* https://tracker.ceph.com/issues/48771
    qa: iogen: workload fails to cause balancing