h1. Quincy

h2. On-call Schedule

* Jul: Venky
* Aug: Patrick
* Sep: Jos
* Oct: Xiubo
* Nov: Rishabh
* Dec: Kotresh
* Jan: Milind

h3. 2024 Jan 31

https://pulpito.ceph.com/yuriw-2024-01-26_01:07:29-fs-wip-yuri4-testing-2024-01-25-1331-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/59534
  qa/workunits/suites/dbench.sh failed with "write failed on handle 9938 (Input/output error)"
* https://tracker.ceph.com/issues/62510
  snaptest-git-ceph.sh failure with fs/thrash
* https://tracker.ceph.com/issues/63132
  qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/58476
  test_non_existent_cluster: cluster does not exist - Ceph - CephFS
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)" (see the fio sketch below this list)

h3. 2024 Jan 17

https://pulpito.ceph.com/yuriw-2024-01-10_16:13:48-fs-wip-yuri6-testing-2024-01-05-0744-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/63132
  qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/58476
  test_non_existent_cluster: cluster does not exist - Ceph - CephFS
* https://tracker.ceph.com/issues/64059
  ior.tbz2 not found (new)
* https://tracker.ceph.com/issues/64060
  Test failure: test_subvolume_group_rm_when_its_not_empty (tasks.cephfs.test_volumes.TestSubvolumeGroups) (new)
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke) (see the session sketch below this list)
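
A mclientcaps(revoke) warning means the MDS asked a client to release capabilities and got no reply within the grace period. A minimal sketch for locating the stuck session by hand, assuming a filesystem named cephfs with a single active rank; the client id is a placeholder and the eviction step is a last resort, not something the QA run does:

<pre>
# Show the full warning, including the client id and the inode whose
# caps are stuck.
ceph health detail

# List sessions on rank 0 to map the client id to a host and mount.
ceph tell mds.cephfs:0 session ls

# Last resort: evict the unresponsive session (id is a placeholder
# taken from the health output).
ceph tell mds.cephfs:0 session evict id=4305
</pre>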

h3. 2024 Jan 12

https://pulpito.ceph.com/yuriw-2024-01-10_19:20:36-fs-wip-vshankar-testing1-quincy-2024-01-10-2010-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58476
  test_non_existent_cluster: cluster does not exist - Ceph - CephFS
* https://tracker.ceph.com/issues/64011 (new)
  qa: Command failed qa/workunits/suites/pjd.sh
* https://tracker.ceph.com/issues/61892
  test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/64012 (new)
  qa: Command failed qa/workunits/fs/full/subvolume_clone.sh

h3. 2024 Jan 2

Re-run: https://pulpito.ceph.com/yuriw-2023-12-27_16:33:25-fs-wip-yuri-testing-2023-12-26-0957-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/63132
  qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58476
  test_non_existent_cluster: cluster does not exist - Ceph - CephFS
* https://tracker.ceph.com/issues/63931
  Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/63212
  qa: failed to download ior.tbz2
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61892
  [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 Dec 27

https://pulpito.ceph.com/yuriw-2023-12-26_19:48:51-fs-wip-yuri-testing-2023-12-26-0957-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/63212
    qa: failed to download ior.tbz2
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/63894
    orchestrator: cephadm failed - alertmanager container not found

h3. 2023 Dec 21

https://pulpito.ceph.com/?branch=wip-yuri11-testing-2023-12-14-1108-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/63931
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 Dec 20

https://pulpito.ceph.com/?branch=wip-vshankar-testing1-2023-12-18-1207-reef-2
(Lots of centos/rhel related issues)

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/59684
    Test failure: test_fscrypt_dummy_encryption_with_quick_group (tasks.cephfs.test_fscrypt.TestFscrypt)
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57087
    qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/57655
    qa: fs:mixed-clients kernel_untar_build failure
* https://tracker.ceph.com/issues/63700
    qa: test_cd_with_args failure
* https://tracker.ceph.com/issues/63699
    qa: failed cephfs-shell test_reading_conf
* https://tracker.ceph.com/issues/63233
    mon|client|mds: valgrind reports possible leaks in the MDS

h3. 2023 December 14

https://pulpito.ceph.com/vshankar-2023-12-13_09:42:45-fs-wip-vshankar-testing3-2023-12-13-1225-quincy-testing-default-smithi/

* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/61610
    CommandFailedError for qa/workunits/suites/fsstress.sh

h3. 2023 October 19

https://pulpito.ceph.com/?branch=wip-vshankar-testing-quincy-20231019.172112

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/62278 (missed qa fix in backport)
    pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (tarball name changed, so test fails with missing tarball - https://tracker.ceph.com/issues/61399#note-20)
* https://tracker.ceph.com/issues/62658
    error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
* https://tracker.ceph.com/issues/62510
    snaptest-git-ceph.sh failure with fs/thrash

h3. 2023 October 10

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-10-10-0720-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (tarball name changed, so test fails with missing tarball - https://tracker.ceph.com/issues/61399#note-20)
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/62278 (missed qa fix in backport)
    pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/57255
    rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UPGRADE_NO_STANDBY_MGR: Upgrade: Need standby mgr daemon

h3. 2023 October 09

https://pulpito.ceph.com/?branch=wip-yuri-testing-2023-10-06-0949-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
* https://tracker.ceph.com/issues/59531
    quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/61182
    qa: workloads/cephfs-mirror-ha-workunit - stopping the mirror daemon after the test finishes times out.
* https://tracker.ceph.com/issues/61399
    qa: build failure for ior (tarball name changed, so test fails with missing tarball - https://tracker.ceph.com/issues/61399#note-20)

h3. 2023 October 06

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-10-06-0948-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/63132
    qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
* https://tracker.ceph.com/issues/62810
    Failure in snaptest-git-ceph.sh (it's an async unlink/create bug) -- Need to fix again
* https://tracker.ceph.com/issues/59343
    qa: fs/snaps/snaptest-multiple-capsnaps.sh failed (pending kclient fix)
* https://tracker.ceph.com/issues/61892
    [testing] qa: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 October 03

https://pulpito.ceph.com/vshankar-2023-09-29_10:09:00-fs-wip-vshankar-testing-quincy-20230929.071619-testing-default-smithi/

* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/55825
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/63071
  qa: Test failure: test_valid_dump_blocked_ops_count (tasks.cephfs.test_admin.TestValidTell)
* https://tracker.ceph.com/issues/61394
  mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log

h3. 2023 August 08

https://trello.com/c/ZjPC9CcN/1820-wip-yuri5-testing-2023-08-08-0807-quincy
https://pulpito.ceph.com/?branch=wip-yuri5-testing-2023-08-08-0807-quincy

* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/62484
    quincy (?): qa: ffsb.sh test failure
* https://tracker.ceph.com/issues/62485
    quincy (?): pybind/mgr/volumes: subvolume rm timeout
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/62489
    testing: did not reconnect to MDS during up:reconnect

h3. 4 August 2023

https://pulpito.ceph.com/?branch=wip-yuri7-testing-2023-07-27-1336-quincy

* https://tracker.ceph.com/issues/51964
    qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* https://tracker.ceph.com/issues/61610
    CommandFailedError for qa/workunits/suites/fsstress.sh
* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

h3. 25 July 2023

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-07-14-0724-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/59531
    cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/61775
    cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/61610
    CommandFailedError for qa/workunits/suites/fsstress.sh

h3. 2023 July 04

https://pulpito.ceph.com/yuriw-2023-07-03_15:34:02-fs-quincy_release-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/58726
  Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/61775
  cephfs-mirror: mirror daemon does not shutdown (in mirror ha tests)
* https://tracker.ceph.com/issues/61892
  Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)

h3. 2023 June 14

http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* https://tracker.ceph.com/issues/58726
  Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* Failed to fetch package version
  http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/7303252
  http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/7303360
* cephfs_mirror: reached maximum tries (51) after waiting for 300 seconds
  http://pulpito.front.sepia.ceph.com/yuriw-2023-06-13_23:20:02-fs-wip-yuri3-testing-2023-06-13-1204-quincy-distro-default-smithi/7303322

h3. 2023 June 07

http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/59531
  cluster [WRN] OSD bench result of 137706.272521 IOPS exceeded the threshold
* https://tracker.ceph.com/issues/61609
  CommandFailedError for qa/workunits/libcephfs/test.sh
* https://tracker.ceph.com/issues/61610
  CommandFailedError for qa/workunits/suites/fsstress.sh
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/61182
  workloads/cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds (mirror daemon stop times out)
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log

Failed to fetch package version
* http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/7292615
* http://pulpito.front.sepia.ceph.com/yuriw-2023-05-31_21:56:15-fs-wip-yuri6-testing-2023-05-31-0933-quincy-distro-default-smithi/7292784

h3. 2023 May 24

https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/61393 (NEW - not related)
  orchestrator bug: cephadm command failed
* https://tracker.ceph.com/issues/58340
  mds: fsstress.sh hangs with multimds
* https://tracker.ceph.com/issues/55332
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/61394 (NEW - not related)
  mds.a (mds.0) 1 : cluster [WRN] evicting unresponsive client smithi152 (4298), after 303.726 seconds" in cluster log
* https://tracker.ceph.com/issues/61182
  workloads/cephfs-mirror-ha-workunit: reached maximum tries (50) after waiting for 300 seconds (mirror daemon stop times out)
* https://tracker.ceph.com/issues/51964
  qa: test_cephfs_mirror_restart_sync_on_blocklist failure
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/58726
  Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* Failed to fetch package version
  https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7284063
  https://pulpito.ceph.com/yuriw-2023-05-23_15:23:11-fs-wip-yuri10-testing-2023-05-18-0815-quincy-distro-default-smithi/7284130

h3. 2023 Apr 21/24

https://pulpito.ceph.com/?branch=wip-pdonnell-testing-20230420.183701-quincy

Two failures of the form
    "Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=58e06d348d8a2da339540be5425a40ec7683e512 "
are a side effect of the revert https://github.com/ceph/ceph/pull/51029. This is expected and should be fixed by the new backport that replaces the reverted one.

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/59531
  quincy: "OSD bench result of 228617.361065 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio)"
* https://tracker.ceph.com/issues/59532
  quincy: cephadm.upgrade from 16.2.4 (related?) stuck with one OSD upgraded

h3. 2023 Mar 02

https://pulpito.ceph.com/yuriw-2023-02-22_20:50:58-fs-wip-yuri4-testing-2023-02-22-0817-quincy-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-02-28_22:41:58-fs-wip-yuri10-testing-2023-02-28-0752-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/58726
    Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
* https://tracker.ceph.com/issues/58340
    mds: fsstress.sh hangs with multimds

h3. 2023 Feb 17

https://pulpito.ceph.com/yuriw-2023-02-16_19:08:52-fs-wip-yuri3-testing-2023-02-16-0752-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58754
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/58756
    qa: error during scrub thrashing
* https://tracker.ceph.com/issues/58757
    qa: Command failed (workunit test suites/fsstress.sh)

h3. 2023 Feb 16

https://pulpito.ceph.com/yuriw-2023-02-13_20:44:19-fs-wip-yuri8-testing-2023-02-07-0753-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58746
    qa: VersionNotFoundError: Failed to fetch package version
* https://tracker.ceph.com/issues/58745
    qa: cephadm failed to stop mon

h3. 2023 Feb 15

https://pulpito.ceph.com/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/57446
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/58656
    qa: Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/58726
    quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
* https://tracker.ceph.com/issues/58727
    quincy: Test failure: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays)

h3. 2023 Feb 07

https://pulpito.ceph.com/yuriw-2023-02-03_23:44:47-fs-wip-yuri8-testing-2023-01-30-1510-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/58656
    qa: Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)

h3. 2022 Oct 21

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-17_17:37:24-fs-wip-yuri-testing-2022-10-17-0746-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57446
    Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/55825
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Oct 17

http://pulpito.front.sepia.ceph.com/yuriw-2022-10-12_16:32:23-fs-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/

* https://tracker.ceph.com/issues/54460
  snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/57446
  Test failure: test_subvolume_snapshot_info_if_orphan_clone (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
* https://tracker.ceph.com/issues/55825
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.4490 isn't responding to mclientcaps(revoke)

h3. 2022 Sep 29

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri6-testing-2022-09-23-1008-quincy

* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/50224
    Test failure: test_mirroring_init_failure_with_recovery (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57280
    qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Sep 09

http://pulpito.front.sepia.ceph.com/yuriw-2022-09-08_18:29:21-fs-wip-yuri6-testing-2022-09-08-0859-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
    cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/57205
    Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
* https://tracker.ceph.com/issues/57446
    qa: test_subvolume_snapshot_info_if_orphan_clone fails
* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57280
    Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing (see the query sketch below this list)
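
shaman can be queried directly, so a failed package-version lookup can be reproduced outside teuthology to check whether a build actually exists for that distro/ref combination. A minimal sketch using the URL from the failure above; the jq filter assumes the response is a JSON list of build records:

<pre>
# An empty result means shaman has no ready build for this search,
# which is what makes the teuthology job bail out.
curl -s 'https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing' \
    | jq '.[] | {ref, sha1, status}'
</pre>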

h3. 2022 Sep 02

https://pulpito.ceph.com/yuriw-2022-09-01_18:27:02-fs-wip-yuri11-testing-2022-09-01-0804-quincy-distro-default-smithi/

and

https://pulpito.ceph.com/?branch=wip-lflores-testing-2-2022-08-26-2240-quincy

* https://tracker.ceph.com/issues/57280
    Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    error during scrub thrashing: Command failed on smithi085 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 Aug 31

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-08-23-1120-quincy

* https://tracker.ceph.com/issues/51964
    Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/57280
    Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing
* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/48773
    error during scrub thrashing: Command failed on smithi085 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status' (see the scrub sketch below this list)
* https://tracker.ceph.com/issues/50223
    cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
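
The scrub-thrashing failure above is the QA task timing out while polling scrub progress via ceph tell. A minimal sketch of the same check done by hand, assuming rank 0 of a filesystem named cephfs (both placeholders):

<pre>
# Start a recursive scrub at the root, then poll its progress the way
# the thrasher does; "cephfs" is a placeholder filesystem name.
ceph tell mds.cephfs:0 scrub start / recursive
ceph tell mds.cephfs:0 scrub status
</pre>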

h3. 2022 Aug 17

https://pulpito.ceph.com/yuriw-2022-08-17_18:46:04-fs-wip-yuri7-testing-2022-08-17-0943-quincy-distro-default-smithi/

The following errors were unrelated to the tests and were fixed in the rerun:

* Command failed on smithi161 with status 127: "sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/ceph:v16.2.4 shell -c ... -- bash -c 'ceph fs dump'"
* Failed to fetch package version from https://shaman.ceph.com/api/search/?status=ready&project=kernel&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=testing
* reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
* https://tracker.ceph.com/issues/56697 - qa: fs/snaps fails for fuse - Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi150 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp ..."
* SSH connection to smithi077 was lost: 'sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'

Rerun: https://pulpito.ceph.com/yuriw-2022-08-18_15:08:53-fs-wip-yuri7-testing-2022-08-17-0943-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log

h3. 2022 Aug 10

http://pulpito.front.sepia.ceph.com/yuriw-2022-08-11_02:21:28-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/
* Most of the failures passed in the re-run; see the re-run failures below.
  - tasks/{1-thrash/mon 2-workunit/fs/snaps - reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
  - tasks/{1-thrash/osd 2-workunit/suites/iozone - reached maximum tries (90) after waiting for 540 seconds - DEBUG:teuthology.misc:7 of 8 OSDs are up
  - tasks/metrics - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - tasks/scrub - No module named 'tasks.cephfs.fuse_mount'
  - tasks/{0-check-counter workunit/suites/iozone} wsync/{no}} - No module named 'tasks.fs'
  - tasks/snap-schedule - cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
  - tasks/volumes/{overrides test/clone}} - No module named 'tasks.ceph'
  - tasks/snapshots - CommandFailedError: Command failed on smithi035 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes - INFO:teuthology.orchestra.run.smithi035.stderr:E: Version '17.2.3-414-ge5c30ac2-1focal' for 'python-ceph' was not found - INFO:teuthology.orchestra.run.smithi035.stderr:E: Unable to locate package libcephfs1
  - tasks/{0-octopus 1-client 2-upgrade 3-compat_client/no}} - No module named 'tasks.ceph'
  - tasks/{1-thrash/osd 2-workunit/suites/pjd}} - No module named 'tasks.ceph'
  - tasks/cfuse_workunit_suites_fsstress traceless/50pc} - No module named 'tasks'
  - tasks/{0-octopus 1-upgrade}} - No module named 'tasks'
  - tasks/{1-thrash/osd 2-workunit/fs/snaps}} - cluster [WRN] client.4520 isn't responding to mclientcaps(revoke),
  - tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} - reached maximum tries (90) after waiting for 540 seconds - teuthology.misc:7 of 8 OSDs are up

Re-run1: http://pulpito.front.sepia.ceph.com/yuriw-2022-08-11_14:24:26-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

* tasks/{1-thrash/mon 2-workunit/fs/snaps - reached maximum tries (90) after waiting for 540 seconds
  DEBUG:teuthology.misc:7 of 8 OSDs are up
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)
* tasks/{1-thrash/mds 2-workunit/cfuse_workunit_snaptests}} - reached maximum tries (90) after waiting for 540 seconds
  DEBUG:teuthology.misc:7 of 8 OSDs are up

Re-run2: http://pulpito.front.sepia.ceph.com/yuriw-2022-08-16_14:46:15-fs-wip-yuri-testing-2022-08-10-1103-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Aug 03

https://pulpito.ceph.com/yuriw-2022-08-04_11:54:20-fs-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi/
Re-run: https://pulpito.ceph.com/yuriw-2022-08-09_15:36:21-fs-wip-yuri8-testing-2022-08-03-1028-quincy-distro-default-smithi

* No module named 'tasks' - Fixed in re-run
* https://tracker.ceph.com/issues/51282
  cluster [WRN] Health check failed: Degraded data redundancy: 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/57064
  qa: test_add_ancestor_and_child_directory failure
* http://tracker.ceph.com/issues/52624
  cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
  cluster [WRN] client.xxxx isn't responding to mclientcaps(revoke)

h3. 2022 Jul 22

https://pulpito.ceph.com/yuriw-2022-07-11_13:37:40-fs-wip-yuri5-testing-2022-07-06-1020-quincy-distro-default-smithi/
re-run: https://pulpito.ceph.com/yuriw-2022-07-12_13:37:44-fs-wip-yuri5-testing-2022-07-06-1020-quincy-distro-default-smithi/
Most failures weren't seen in the re-run.

* http://tracker.ceph.com/issues/52624
  Health check failed: Reduced data availability
* https://tracker.ceph.com/issues/50223
  client.xxxx isn't responding to mclientcaps(revoke)
* https://tracker.ceph.com/issues/54462
  Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 Jul 13

https://pulpito.ceph.com/yuriw-2022-07-08_17:05:01-fs-wip-yuri2-testing-2022-07-08-0453-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 2 pgs peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/51964
  Test failure: test_cephfs_mirror_restart_sync_on_blocklist (tasks.cephfs.test_mirroring.TestMirroring)
* https://tracker.ceph.com/issues/48773
  error during scrub thrashing: Command failed on smithi085 with status 1: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub status'

h3. 2022 Jun 08

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-07_22:29:43-fs-wip-yuri3-testing-2022-06-07-0722-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Jun 07

http://pulpito.front.sepia.ceph.com/yuriw-2022-06-02_20:32:25-fs-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

h3. 2022 Jun 03

https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-06-02-0810-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 May 31

http://pulpito.front.sepia.ceph.com/yuriw-2022-05-27_21:58:39-fs-wip-yuri2-testing-2022-05-27-1033-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log

703 40 Milind Changire
h3. 2022 May 26
704 15 Venky Shankar
705
https://pulpito.ceph.com/?branch=wip-yuri-testing-2022-05-10-1027-quincy
706 1 Venky Shankar
707 15 Venky Shankar
* http://tracker.ceph.com/issues/52624
708
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
709
* https://tracker.ceph.com/issues/50223
710
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
711
* https://tracker.ceph.com/issues/54462
712
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128
713 14 Venky Shankar
714 40 Milind Changire
h3. 2022 May 10

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri-testing-2022-05-05-0838-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"

h3. 2022 April 29

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-04-22-0534-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/54462
    Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128

h3. 2022 April 13

http://pulpito.front.sepia.ceph.com/?branch=wip-yuri3-testing-2022-04-11-0746-quincy

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* https://tracker.ceph.com/issues/52438
    qa: ffsb timeout

h3. 2022 March 31

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-29_20:09:22-fs-wip-yuri-testing-2022-03-29-0741-quincy-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-03-30_14:35:58-fs-wip-yuri-testing-2022-03-29-0741-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
    cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54460
    snaptest-multiple-capsnaps.sh test failure
* https://tracker.ceph.com/issues/50223
    qa: "client.4737 isn't responding to mclientcaps(revoke)"
* http://tracker.ceph.com/issues/54606
    check-counter task runs till max job timeout

A handful of jobs failed due to:
<pre>
Command failed on smithi055 with status 1: 'sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:c5bb4e7d582f118c1093d94fbfedfb197eaa03b4 -v bootstrap --fsid 44e07f86-b03b-11ec-8c35-001a4aab830c --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id x --orphan-initial-daemons --skip-monitoring-stack --mon-ip 172.21.15.55 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring'
</pre>

h3. 2022 March 17

http://pulpito.front.sepia.ceph.com/yuriw-2022-03-14_18:57:01-fs-wip-yuri2-testing-2022-03-14-0946-quincy-distro-default-smithi/

* http://tracker.ceph.com/issues/52624
   cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* http://tracker.ceph.com/issues/54461
   ffsb.sh test failure
* http://tracker.ceph.com/issues/54606
   check-counter task runs till max job timeout

A couple of jobs went dead with:

<pre>
    2022-03-15T05:15:22.447 ERROR:paramiko.transport:Socket exception: No route to host (113)
    2022-03-15T05:15:22.452 DEBUG:teuthology.orchestra.run:got remote process result: None
    2022-03-15T05:15:22.453 INFO:tasks.workunit:Stopping ['suites/fsstress.sh'] on client.0...
</pre>

h3. 2022 March 1

* https://tracker.ceph.com/issues/51282 (maybe?)
   cluster [WRN] Health check failed: Degraded data redundancy: 2/4 objects degraded (50.000%), 1 pg degraded (PG_DEGRADED)" in cluster log
* https://tracker.ceph.com/issues/52624
   cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)" in cluster log
* https://tracker.ceph.com/issues/54460
   Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi152 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-multiple-capsnaps.sh'
* https://tracker.ceph.com/issues/50223
   cluster [WRN] client.14480 isn't responding to mclientcaps(revoke), ino 0x1000000f3fd pending pAsLsXsFsc issued pAsLsXsFscb, sent 304.933510 seconds ago" in cluster log
* https://tracker.ceph.com/issues/54461
   Command failed (workunit test suites/ffsb.sh) on smithi124 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/suites/ffsb.sh'
* https://tracker.ceph.com/issues/54462
   Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status 128: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=465157b30605a0c958df893de628c923386baa8e TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/snaps/snaptest-git-ceph.sh'