h1. Quincy

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h3. 2023 October 19

https://pulpito.ceph.com/?branch=wip-vshankar-testing-quincy-20231019.172112

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62278 (missed qa fix in backport; see the sketch after this list)
    pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (tarball name changed, so test fails with missing tarball - https://tracker.ceph.com/issues/61399#note-20)
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash
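
For reference, the pending_subvolume_deletions counter in the 62278 item above is part of the mgr volumes plugin's "fs volume info" output; a minimal sketch of inspecting it, assuming a volume named "cephfs" (the volume name is illustrative):

<pre>
# Dump volume info, which includes the pending_subvolume_deletions
# counter discussed in tracker 62278:
ceph fs volume info cephfs
</pre>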

h3. 2023 October 10

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-10-10-0720-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (tarball name changed, so test fails with missing tarball - https://tracker.ceph.com/issues/61399#note-20)
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62278 (missed qa fix in backport)
    pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/57255
    rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon
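
The UPGRADE_NO_STANDBY_MGR pause above normally clears once a standby mgr is available; a minimal recovery sketch, assuming cephadm is managing the mgr daemons (the daemon count of 2 is illustrative):

<pre>
# Make sure a standby mgr exists, then resume the paused upgrade:
ceph orch apply mgr 2
ceph orch upgrade resume
# Confirm the upgrade is progressing again:
ceph orch upgrade status
</pre>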

h3. 2023 October 09

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-10-06-0949-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61182
    qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the test finishes times out.
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (tarball name changed, so test fails with missing tarball - https://tracker.ceph.com/issues/61399#note-20)

h3. 2023 October 06

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-10-06-0948-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
* https://tracker.ceph.com/issues/59343
    qa: fs/snaps/snaptest-multiple-capsnaps.sh failed (pending kclient fix)
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 October 03

https://pulpito.ceph.com/vshankar-2023-09-29_10:09:00-fs-wip-vshankar-testing-quincy-20230929.071619-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/55825
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/63071
  qa: Test failure: test_valid_dump_blocked_ops_count (tasks.cephfs.test_admin.TestValidTell)
* https://tracker.ceph.com/issues/61394
  mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log

h3. 2023 August 08

https://trello.com/c/ZjPC9CcN/1820-wip-yuri5-testing-2023-08-08-0807-quincy
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-08-08-0807-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62484
    quincy (?): qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62485
    quincy (?): pybind/mgr/volumes: subvolume rm timeout
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62489
    testing: did not reconnect to MDS during up:reconnect

h3. 2023 August 04

https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-27-1336-quincy

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61610
    CommandFailedError for qa/workunits/suites/fsstress.sh
* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 2023 July 25

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-07-14-0724-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh

h3. 2023 July 04

https://pulpito.ceph.com/yuriw-2023-07-03_15:34:02-fs-quincy_release-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/58726
  Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/61892
    Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 June 14

http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/58726
  Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* Failed to fetch package version
  http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/7303252
  http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/7303360
* cephfs_mirror: reached maximum tries (51) after waiting for 300 seconds
  http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/7303322

h3. 2023 June 07

http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/61609
  CommandFailedError for qa/workunits/libcephfs/test.sh
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61182
  workloads/cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds (mirror daemon stop times out)
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log

Failed to fetch package version
* http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/7292615
* http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/7292784

h3. 2023 May 24

https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/61393 (NEW - not related)
  orchestrator bug: cephadm command failed
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/55332
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61394 (NEW - not related)
  mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61182
  workloads/cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds (mirror daemon stop times out)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/59531 (see the sketch after this list)
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/58726
  Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* Failed to fetch package version
  https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7284063
  https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7284130
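
As a reference for the 59531 item above: the warning comes from the mClock scheduler distrusting the OSD bench result, and the recommended remediation is to measure the device with an external benchmark and pin the capacity explicitly. A minimal sketch, taking osd.7 and 315 IOPS from the log text (the HDD device class is an assumption; use the _ssd variant for SSDs):

<pre>
# Pin the mClock IOPS capacity for the affected OSD to the value
# measured with an external tool such as fio:
ceph config set osd.7 osd_mclock_max_capacity_iops_hdd 315
# Verify the override:
ceph config get osd.7 osd_mclock_max_capacity_iops_hdd
</pre>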

h3. 2023 Apr 21/24

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20230420.183701-quincy

2 Failures:
    "Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=58e06d348d8a2da339540be5425a40ec7683e512 "

These are a side effect of the revert https://github.com/ceph/ceph/pull/51029. This is expected and should be fixed by a new backport of the reverted change.

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59532
  quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD upgraded

h3. 2023 Mar 02

https://pulpito.ceph.com/yuriw-2023-02-22_20:50:58-fs-wip-yuri4-testing-2023-02-22-0817-quincy-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-02-28_22:41:58-fs-wip-yuri10-testing-2023-02-28-0752-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 2023 Feb 17

https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-fs-wip-yuri3-testing-2023-02-16-0752-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58754
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/58756
    qa: error during scrub thrashing
* https://tracker.ceph.com/issues/58757
    qa: Command failed (workunit test suites/fsstress.sh)

h3. 2023 Feb 16

https://pulpito.ceph.com/yuriw-2023-02-13_20:44:19-fs-wip-yuri8-testing-2023-02-07-0753-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58746
    qa: VersionNotFoundError: Failed to fetch package version
* https://tracker.ceph.com/issues/58745
    qa: cephadm failed to stop mon

h3. 2023 Feb 15

https://pulpito.ceph.com/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/57446
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/58656
    qa: Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/58726
    quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58727
    quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)

h3. 2023 Feb 07

https://pulpito.ceph.com/yuriw-2023-02-03_23:44:47-fs-wip-yuri8-testing-2023-01-30-1510-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58656
    qa: Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 Oct 21

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-17_17:37:24-fs-wip-yuri-testing-2022-10-17-0746-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57446
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 17

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-12_16:32:23-fs-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57446
  Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/55825
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.4490 isn't responding to mclientcaps(revoke)

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-09-23-1008-quincy

* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Sep 09

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-08_18:29:21-fs-wip-yuri6-testing-2022-09-08-0859-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57280
    Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing 
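
These shaman lookup failures can be checked outside teuthology by querying the same search endpoint directly (the URL is copied verbatim from the failure); an empty JSON list in the response means no ready build matched, which is exactly when the package version fetch fails:

<pre>
# Ask shaman whether a ready kernel build exists for the testing ref:
curl -s 'https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing'
</pre>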

h3. 2022 Sep 02

https://pulpito.ceph.com/yuriw-2022-09-01_18:27:02-fs-wip-yuri11-testing-2022-09-01-0804-quincy-distro-default-smithi/

and

https://pulpito.ceph.com/?branch=wip-lflores-testing-2-2022-08-26-2240-quincy

* https://tracker.ceph.com/issues/57280
    Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    error during scrub thrashing: Command failed on smithi085 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 Aug 31

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-08-23-1120-quincy

* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57280
    Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing
* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    error during scrub thrashing: Command failed on smithi085 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Aug 17

https://pulpito.ceph.com/yuriw-2022-08-17_18:46:04-fs-wip-yuri7-testing-2022-08-17-0943-quincy-distro-default-smithi/

The following errors were unrelated to the tests and were fixed in the rerun:

* Command failed on smithi161 with status 127: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c ... -- bash -c 'ceph fs dump'"
* Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing
* reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
* https://tracker.ceph.com/issues/56697 - qa: fs/snaps fails for fuse - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp ..."
* SSH connection to smithi077 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'

Rerun: https://pulpito.ceph.com/yuriw-2022-08-18_15:08:53-fs-wip-yuri7-testing-2022-08-17-0943-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log

h3. 2022 Aug 10

http://pulpito.front.sepia.ceph.com/yuriw-2022-08-11_02:21:28-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

* Most of the failures passed in the re-run. Please check the re-run failures below.
  - tasks/{1-thrash/mon 2-workunit/fs/snaps - reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
  - tasks/{1-thrash/osd 2-workunit/suites/iozone - reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
  - tasks/metrics - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - tasks/scrub - No module named 'tasks.cephfs.fuse_mount'
  - tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} - No module named 'tasks.fs'
  - tasks/snap-schedule - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - tasks/volumes/{overrides test/clone}} - No module named 'tasks.ceph'
  - tasks/snapshots - CommandFailedError: Command failed on smithi035 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes - INFO:teuthology.orchestra.run.smithi035.stderr:E: Version '17.2.3-414-ge5c30ac2-1focal' for 'python-ceph' was not found - INFO:teuthology.orchestra.run.smithi035.stderr:E: Unable to locate package libcephfs1
  - tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} - No module named 'tasks.ceph'
  - tasks/{1-thrash/osd 2-workunit/suites/pjd}} - No module named 'tasks.ceph'
  - tasks/cfuse_workunit_suites_fsstress traceless/50pc} - No module named 'tasks'
  - tasks/{0-octopus 1-upgrade}} - No module named 'tasks'
  - tasks/{1-thrash/osd 2-workunit/fs/snaps}} - cluster [WRN] client.4520 isn't responding to mclientcaps(revoke),
  - tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} - reached maximum tries (90) after waiting for 540 seconds - teuthology.misc:7 of 8 OSDs are up

Re-run1: http://pulpito.front.sepia.ceph.com/yuriw-2022-08-11_14:24:26-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

* tasks/{1-thrash/mon 2-workunit/fs/snaps - reached maximum tries (90) after waiting for 540 seconds
  DEBUG:teuthology.misc:7 of 8 OSDs are up
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} - reached maximum tries (90) after waiting for 540 seconds
  DEBUG:teuthology.misc:7 of 8 OSDs are up

Re-run2: http://pulpito.front.sepia.ceph.com/yuriw-2022-08-16_14:46:15-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Aug 03

https://pulpito.ceph.com/yuriw-2022-08-04_11:54:20-fs-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi/
Re-run: https://pulpito.ceph.com/yuriw-2022-08-09_15:36:21-fs-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi

* No module named 'tasks' - Fixed in re-run
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/57064
  qa: test_add_ancestor_and_child_directory failure
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
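
When triaging these mclientcaps(revoke) warnings, the usual first step is to inspect the MDS sessions and, if a client really is stuck, evict it by hand; a minimal sketch (mds.0 addresses rank 0, and the client id is illustrative, in the style of the log lines on this page):

<pre>
# List client sessions on rank 0 to identify the unresponsive client:
ceph tell mds.0 client ls
# Evict a stuck client by id (4490 is an illustrative id):
ceph tell mds.0 client evict id=4490
</pre>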

h3. 2022 Jul 22

https://pulpito.ceph.com/yuriw-2022-07-11_13:37:40-fs-wip-yuri5-testing-2022-07-06-1020-quincy-distro-default-smithi/
re-run: https://pulpito.ceph.com/yuriw-2022-07-12_13:37:44-fs-wip-yuri5-testing-2022-07-06-1020-quincy-distro-default-smithi/
Most failures weren't seen in the re-run.

* http://tracker.ceph.com/issues/52624
  Health check failed: Reduced data availability
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 Jul 13

https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-testing-2022-07-08-0453-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/48773
  error during scrub thrashing: Command failed on smithi085 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status' 
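
The scrub-thrashing failure boils down to the "scrub status" tell command erroring out; when reproducing locally, the same probe the thrasher runs can be issued by hand (the mds.1:0 rank spec is copied from the failed command above; adjust it for the filesystem under test):

<pre>
# Query scrub state on rank 0 of the filesystem with FSCID 1,
# mirroring the probe used by the scrub thrasher:
ceph tell mds.1:0 scrub status
</pre>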

h3. 2022 Jun 08

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-07_22:29:43-fs-wip-yuri3-testing-2022-06-07-0722-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
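
These PG_AVAILABILITY health checks are usually transient peering blips around OSD or MDS restarts; when deciding whether a run only tripped over them, it is often enough to grep the run's cluster log for the health-check lines (the log path below is a placeholder for the run's archive):

<pre>
# Count transient peering health checks in a run's cluster log
# (substitute the real archive path for the run):
grep -c 'Reduced data availability' /path/to/teuthology-archive/ceph.log
</pre>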

h3. 2022 Jun 07

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:32:25-fs-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Jun 03

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-06-02-0810-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 May 31

http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_21:58:39-fs-wip-yuri2-testing-2022-05-27-1033-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 May 26

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-05-10-1027-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 May 10

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-05-05-0838-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 April 29

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-04-22-0534-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 April 13

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2022-04-11-0746-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
   qa: ffsb timeout

h3. 2022 March 31

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-29_20:09:22-fs-wip-yuri-testing-2022-03-29-0741-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-30_14:35:58-fs-wip-yuri-testing-2022-03-29-0741-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54460
    snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* http://tracker.ceph.com/issues/54606
   check-counter task runs till max job timeout

A handful of jobs failed due to:
<pre>
Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c5bb4e7d582f118c1093d94fbfedfb197eaa03b4 -v bootstrap --fsid 44e07f86-b03b-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.55 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
</pre>

h3. 2022 March 17

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-14_18:57:01-fs-wip-yuri2-testing-2022-03-14-0946-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
   cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* http://tracker.ceph.com/issues/54461
   ffsb.sh test failure
* http://tracker.ceph.com/issues/54606
   check-counter task runs till max job timeout

A couple of jobs died with:

<pre>
    2022-03-15T05:15:22.447 ERROR:paramiko.transport:Socket exception: No route to host (113)
    2022-03-15T05:15:22.452 DEBUG:teuthology.orchestra.run:got remote process result: None
    2022-03-15T05:15:22.453 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.0...
</pre>

h3. 2022 March 1

* https://tracker.ceph.com/issues/51282 (maybe?)
   cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
   cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54460
   Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.14480 isn't responding to mclientcaps(revoke), ino 0x1000000f3fd pending pAsLsXsFsc issued pAsLsXsFscb, sent 304.933510 seconds ago" in cluster log
* https://tracker.ceph.com/issues/54461
  Command failed (workunit test suites/ffsb.sh) on smithi124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'